Jun 20 19:31:32.806162 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:31:32.806207 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:31:32.806216 kernel: BIOS-provided physical RAM map:
Jun 20 19:31:32.806222 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 20 19:31:32.806229 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 20 19:31:32.806235 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 20 19:31:32.806242 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jun 20 19:31:32.806251 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jun 20 19:31:32.806257 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jun 20 19:31:32.806263 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jun 20 19:31:32.806270 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:31:32.806276 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 20 19:31:32.806283 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:31:32.806323 kernel: NX (Execute Disable) protection: active
Jun 20 19:31:32.806334 kernel: APIC: Static calls initialized
Jun 20 19:31:32.806341 kernel: SMBIOS 2.8 present.
Jun 20 19:31:32.806348 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jun 20 19:31:32.806355 kernel: DMI: Memory slots populated: 1/1
Jun 20 19:31:32.806362 kernel: Hypervisor detected: KVM
Jun 20 19:31:32.806369 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:31:32.806376 kernel: kvm-clock: using sched offset of 3238552358 cycles
Jun 20 19:31:32.806384 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:31:32.806391 kernel: tsc: Detected 2794.748 MHz processor
Jun 20 19:31:32.806400 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:31:32.806408 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:31:32.806415 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jun 20 19:31:32.806431 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 20 19:31:32.806446 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:31:32.806453 kernel: Using GB pages for direct mapping
Jun 20 19:31:32.806460 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:31:32.806467 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jun 20 19:31:32.806474 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806484 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806491 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806498 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jun 20 19:31:32.806505 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806512 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806519 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806526 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:31:32.806533 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jun 20 19:31:32.806546 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jun 20 19:31:32.806553 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jun 20 19:31:32.806561 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jun 20 19:31:32.806570 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jun 20 19:31:32.806579 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jun 20 19:31:32.806588 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jun 20 19:31:32.806600 kernel: No NUMA configuration found
Jun 20 19:31:32.806609 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jun 20 19:31:32.806618 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jun 20 19:31:32.806627 kernel: Zone ranges:
Jun 20 19:31:32.806637 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:31:32.806646 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jun 20 19:31:32.806655 kernel: Normal empty
Jun 20 19:31:32.806662 kernel: Device empty
Jun 20 19:31:32.806671 kernel: Movable zone start for each node
Jun 20 19:31:32.806680 kernel: Early memory node ranges
Jun 20 19:31:32.806693 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 20 19:31:32.806702 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jun 20 19:31:32.806712 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jun 20 19:31:32.806722 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:31:32.806731 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 19:31:32.806741 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jun 20 19:31:32.806749 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 19:31:32.806756 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:31:32.806766 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:31:32.806795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 19:31:32.806804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:31:32.806814 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:31:32.806823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:31:32.806833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:31:32.806843 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:31:32.806852 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 20 19:31:32.806862 kernel: TSC deadline timer available
Jun 20 19:31:32.806871 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:31:32.806884 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:31:32.806892 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:31:32.806902 kernel: CPU topo: Max. threads per core: 1
Jun 20 19:31:32.806911 kernel: CPU topo: Num. cores per package: 4
Jun 20 19:31:32.806921 kernel: CPU topo: Num. threads per package: 4
Jun 20 19:31:32.806931 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jun 20 19:31:32.806939 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 19:31:32.806946 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 20 19:31:32.806954 kernel: kvm-guest: setup PV sched yield
Jun 20 19:31:32.806961 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jun 20 19:31:32.806971 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:31:32.806988 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:31:32.806995 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 20 19:31:32.807003 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jun 20 19:31:32.807010 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jun 20 19:31:32.807018 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 20 19:31:32.807028 kernel: kvm-guest: PV spinlocks enabled
Jun 20 19:31:32.807037 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:31:32.807048 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:31:32.807062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:31:32.807071 kernel: random: crng init done
Jun 20 19:31:32.807081 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:31:32.807090 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:31:32.807099 kernel: Fallback order for Node 0: 0
Jun 20 19:31:32.807108 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jun 20 19:31:32.807118 kernel: Policy zone: DMA32
Jun 20 19:31:32.807127 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:31:32.807140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 20 19:31:32.807150 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:31:32.807160 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:31:32.807169 kernel: Dynamic Preempt: voluntary
Jun 20 19:31:32.807179 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:31:32.807187 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:31:32.807195 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 20 19:31:32.807202 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:31:32.807209 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:31:32.807219 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:31:32.807226 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:31:32.807234 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 20 19:31:32.807241 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:31:32.807248 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:31:32.807256 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:31:32.807263 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 20 19:31:32.807271 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:31:32.807286 kernel: Console: colour VGA+ 80x25
Jun 20 19:31:32.807294 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:31:32.807302 kernel: ACPI: Core revision 20240827
Jun 20 19:31:32.807309 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 20 19:31:32.807319 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:31:32.807326 kernel: x2apic enabled
Jun 20 19:31:32.807334 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:31:32.807341 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 20 19:31:32.807349 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 20 19:31:32.807358 kernel: kvm-guest: setup PV IPIs
Jun 20 19:31:32.807366 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 19:31:32.807374 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jun 20 19:31:32.807382 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jun 20 19:31:32.807389 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:31:32.807397 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 20 19:31:32.807404 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 20 19:31:32.807412 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:31:32.807419 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:31:32.807429 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:31:32.807437 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 20 19:31:32.807444 kernel: RETBleed: Mitigation: untrained return thunk
Jun 20 19:31:32.807452 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 20 19:31:32.807459 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 20 19:31:32.807467 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 20 19:31:32.807475 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 20 19:31:32.807483 kernel: x86/bugs: return thunk changed
Jun 20 19:31:32.807493 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 20 19:31:32.807500 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:31:32.807508 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:31:32.807515 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:31:32.807523 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:31:32.807531 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 20 19:31:32.807538 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:31:32.807546 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:31:32.807564 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:31:32.807581 kernel: landlock: Up and running.
Jun 20 19:31:32.807601 kernel: SELinux: Initializing.
Jun 20 19:31:32.807613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:31:32.807625 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:31:32.807635 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 20 19:31:32.807643 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 20 19:31:32.807651 kernel: ... version: 0
Jun 20 19:31:32.807658 kernel: ... bit width: 48
Jun 20 19:31:32.807671 kernel: ... generic registers: 6
Jun 20 19:31:32.807681 kernel: ... value mask: 0000ffffffffffff
Jun 20 19:31:32.807689 kernel: ... max period: 00007fffffffffff
Jun 20 19:31:32.807696 kernel: ... fixed-purpose events: 0
Jun 20 19:31:32.807704 kernel: ... event mask: 000000000000003f
Jun 20 19:31:32.807712 kernel: signal: max sigframe size: 1776
Jun 20 19:31:32.807719 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:31:32.807727 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:31:32.807734 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:31:32.807742 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:31:32.807752 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:31:32.807759 kernel: .... node #0, CPUs: #1 #2 #3
Jun 20 19:31:32.807767 kernel: smp: Brought up 1 node, 4 CPUs
Jun 20 19:31:32.807791 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jun 20 19:31:32.807799 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 136904K reserved, 0K cma-reserved)
Jun 20 19:31:32.807807 kernel: devtmpfs: initialized
Jun 20 19:31:32.807815 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:31:32.807822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:31:32.807830 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 20 19:31:32.807840 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:31:32.807848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:31:32.807858 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:31:32.807868 kernel: audit: type=2000 audit(1750447889.491:1): state=initialized audit_enabled=0 res=1
Jun 20 19:31:32.807878 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:31:32.807888 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:31:32.807898 kernel: cpuidle: using governor menu
Jun 20 19:31:32.807908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:31:32.807919 kernel: dca service started, version 1.12.1
Jun 20 19:31:32.807933 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jun 20 19:31:32.807943 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jun 20 19:31:32.807952 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:31:32.807962 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:31:32.807972 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:31:32.807991 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:31:32.808001 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:31:32.808011 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:31:32.808019 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:31:32.808030 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:31:32.808037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:31:32.808045 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:31:32.808052 kernel: ACPI: Interpreter enabled
Jun 20 19:31:32.808060 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 20 19:31:32.808068 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:31:32.808075 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:31:32.808083 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:31:32.808090 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 20 19:31:32.808100 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:31:32.808273 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:31:32.808392 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 20 19:31:32.808532 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 20 19:31:32.808543 kernel: PCI host bridge to bus 0000:00
Jun 20 19:31:32.808662 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:31:32.808785 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:31:32.808899 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:31:32.809011 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jun 20 19:31:32.809116 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 19:31:32.809219 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jun 20 19:31:32.809322 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:31:32.809454 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:31:32.809586 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:31:32.809704 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jun 20 19:31:32.809840 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jun 20 19:31:32.809956 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jun 20 19:31:32.810078 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:31:32.810209 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:31:32.810325 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jun 20 19:31:32.810445 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jun 20 19:31:32.810562 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jun 20 19:31:32.810692 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 20 19:31:32.810834 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jun 20 19:31:32.810954 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jun 20 19:31:32.811076 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jun 20 19:31:32.811200 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 20 19:31:32.811320 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jun 20 19:31:32.811434 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jun 20 19:31:32.811551 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jun 20 19:31:32.811667 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jun 20 19:31:32.811817 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:31:32.811937 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 20 19:31:32.812073 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jun 20 19:31:32.812188 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jun 20 19:31:32.812302 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jun 20 19:31:32.812423 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jun 20 19:31:32.812539 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jun 20 19:31:32.812549 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:31:32.812557 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:31:32.812568 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:31:32.812576 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:31:32.812583 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 20 19:31:32.812591 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 20 19:31:32.812599 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 20 19:31:32.812606 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 20 19:31:32.812614 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 20 19:31:32.812621 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 20 19:31:32.812629 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 20 19:31:32.812639 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 20 19:31:32.812646 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 20 19:31:32.812654 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 20 19:31:32.812661 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 20 19:31:32.812669 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 20 19:31:32.812676 kernel: iommu: Default domain type: Translated
Jun 20 19:31:32.812684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:31:32.812692 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:31:32.812699 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:31:32.812709 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 20 19:31:32.812717 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jun 20 19:31:32.812845 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 20 19:31:32.812969 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 20 19:31:32.813093 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:31:32.813103 kernel: vgaarb: loaded
Jun 20 19:31:32.813111 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 19:31:32.813119 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 20 19:31:32.813127 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:31:32.813138 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:31:32.813146 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:31:32.813154 kernel: pnp: PnP ACPI init
Jun 20 19:31:32.813278 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jun 20 19:31:32.813289 kernel: pnp: PnP ACPI: found 6 devices
Jun 20 19:31:32.813297 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:31:32.813305 kernel: NET: Registered PF_INET protocol family
Jun 20 19:31:32.813313 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:31:32.813323 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:31:32.813331 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:31:32.813339 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:31:32.813347 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 19:31:32.813354 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 19:31:32.813362 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:31:32.813370 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:31:32.813377 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:31:32.813387 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:31:32.813497 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:31:32.813603 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:31:32.813707 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:31:32.813831 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jun 20 19:31:32.813935 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jun 20 19:31:32.814047 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jun 20 19:31:32.814057 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:31:32.814065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jun 20 19:31:32.814076 kernel: Initialise system trusted keyrings
Jun 20 19:31:32.814084 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 19:31:32.814092 kernel: Key type asymmetric registered
Jun 20 19:31:32.814099 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:31:32.814107 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:31:32.814115 kernel: io scheduler mq-deadline registered
Jun 20 19:31:32.814123 kernel: io scheduler kyber registered
Jun 20 19:31:32.814130 kernel: io scheduler bfq registered
Jun 20 19:31:32.814138 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:31:32.814148 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 20 19:31:32.814156 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 20 19:31:32.814163 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jun 20 19:31:32.814171 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:31:32.814179 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:31:32.814186 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:31:32.814194 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:31:32.814202 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:31:32.814328 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 19:31:32.814341 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 19:31:32.814450 kernel: rtc_cmos 00:04: registered as rtc0
Jun 20 19:31:32.814558 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:31:32 UTC (1750447892)
Jun 20 19:31:32.814665 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 20 19:31:32.814675 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 19:31:32.814682 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:31:32.814690 kernel: Segment Routing with IPv6
Jun 20 19:31:32.814697 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:31:32.814708 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:31:32.814715 kernel: Key type dns_resolver registered
Jun 20 19:31:32.814723 kernel: IPI shorthand broadcast: enabled
Jun 20 19:31:32.814731 kernel: sched_clock: Marking stable (2749002787, 115286038)->(2878107908, -13819083)
Jun 20 19:31:32.814739 kernel: registered taskstats version 1
Jun 20 19:31:32.814746 kernel: Loading compiled-in X.509 certificates
Jun 20 19:31:32.814754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:31:32.814762 kernel: Demotion targets for Node 0: null
Jun 20 19:31:32.814783 kernel: Key type .fscrypt registered Jun 20 19:31:32.814793 kernel: Key type fscrypt-provisioning registered Jun 20 19:31:32.814801 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:31:32.814808 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:31:32.814816 kernel: ima: No architecture policies found Jun 20 19:31:32.814824 kernel: clk: Disabling unused clocks Jun 20 19:31:32.814831 kernel: Warning: unable to open an initial console. Jun 20 19:31:32.814839 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:31:32.814847 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:31:32.814857 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:31:32.814864 kernel: Run /init as init process Jun 20 19:31:32.814872 kernel: with arguments: Jun 20 19:31:32.814880 kernel: /init Jun 20 19:31:32.814887 kernel: with environment: Jun 20 19:31:32.814895 kernel: HOME=/ Jun 20 19:31:32.814902 kernel: TERM=linux Jun 20 19:31:32.814910 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:31:32.814918 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:31:32.814931 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:31:32.814951 systemd[1]: Detected virtualization kvm. Jun 20 19:31:32.814959 systemd[1]: Detected architecture x86-64. Jun 20 19:31:32.814967 systemd[1]: Running in initrd. Jun 20 19:31:32.814983 systemd[1]: No hostname configured, using default hostname. Jun 20 19:31:32.814993 systemd[1]: Hostname set to . Jun 20 19:31:32.815003 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:31:32.815013 systemd[1]: Queued start job for default target initrd.target. 
Jun 20 19:31:32.815021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:31:32.815030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:31:32.815039 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:31:32.815047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:31:32.815056 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:31:32.815067 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:31:32.815077 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:31:32.815086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:31:32.815094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:31:32.815103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:31:32.815112 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:31:32.815120 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:31:32.815130 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:31:32.815138 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:31:32.815147 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:31:32.815155 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:31:32.815164 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:31:32.815172 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jun 20 19:31:32.815181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:31:32.815189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:31:32.815198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:31:32.815208 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:31:32.815216 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:31:32.815225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:31:32.815233 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:31:32.815243 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:31:32.815255 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:31:32.815263 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:31:32.815272 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:31:32.815280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:31:32.815289 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:31:32.815298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:31:32.815308 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:31:32.815317 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:31:32.815342 systemd-journald[219]: Collecting audit messages is disabled.
Jun 20 19:31:32.815367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:31:32.815376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:31:32.815385 systemd-journald[219]: Journal started
Jun 20 19:31:32.815404 systemd-journald[219]: Runtime Journal (/run/log/journal/55d83f635ffd4b178b0484a69e31556c) is 6M, max 48.6M, 42.5M free.
Jun 20 19:31:32.806466 systemd-modules-load[221]: Inserted module 'overlay'
Jun 20 19:31:32.818793 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:31:32.834766 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:31:32.834799 kernel: Bridge firewalling registered
Jun 20 19:31:32.833886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:31:32.834968 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jun 20 19:31:32.872879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:31:32.874288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:31:32.874759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:31:32.880345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:31:32.882979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:31:32.884012 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:31:32.889026 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:31:32.893134 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:31:32.894660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:31:32.906950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:31:32.907950 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:31:32.931475 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:31:32.941579 systemd-resolved[256]: Positive Trust Anchors:
Jun 20 19:31:32.941598 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:31:32.941629 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:31:32.944051 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jun 20 19:31:32.945023 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:31:32.951345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:31:33.036806 kernel: SCSI subsystem initialized
Jun 20 19:31:33.045800 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:31:33.055798 kernel: iscsi: registered transport (tcp)
Jun 20 19:31:33.076798 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:31:33.076819 kernel: QLogic iSCSI HBA Driver
Jun 20 19:31:33.096257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:31:33.125948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:31:33.127545 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:31:33.179161 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:31:33.181740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:31:33.236799 kernel: raid6: avx2x4 gen() 30536 MB/s
Jun 20 19:31:33.253798 kernel: raid6: avx2x2 gen() 30752 MB/s
Jun 20 19:31:33.270893 kernel: raid6: avx2x1 gen() 25652 MB/s
Jun 20 19:31:33.270911 kernel: raid6: using algorithm avx2x2 gen() 30752 MB/s
Jun 20 19:31:33.288903 kernel: raid6: .... xor() 19788 MB/s, rmw enabled
Jun 20 19:31:33.288930 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 19:31:33.308800 kernel: xor: automatically using best checksumming function avx
Jun 20 19:31:33.470808 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:31:33.478825 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:31:33.482643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:31:33.513470 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jun 20 19:31:33.518915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:31:33.523239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:31:33.554225 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Jun 20 19:31:33.582666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:31:33.585359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:31:33.661934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:31:33.665705 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:31:33.693817 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jun 20 19:31:33.696272 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 20 19:31:33.701724 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:31:33.701746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:31:33.701764 kernel: GPT:9289727 != 19775487
Jun 20 19:31:33.701793 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:31:33.701804 kernel: GPT:9289727 != 19775487
Jun 20 19:31:33.701814 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:31:33.701828 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:31:33.708790 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:31:33.716302 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jun 20 19:31:33.739061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:31:33.739186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:31:33.742626 kernel: libata version 3.00 loaded.
Jun 20 19:31:33.742669 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:31:33.743897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:31:33.754413 kernel: ahci 0000:00:1f.2: version 3.0
Jun 20 19:31:33.754600 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jun 20 19:31:33.759100 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jun 20 19:31:33.759272 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jun 20 19:31:33.759408 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jun 20 19:31:33.763791 kernel: scsi host0: ahci
Jun 20 19:31:33.766790 kernel: scsi host1: ahci
Jun 20 19:31:33.766970 kernel: scsi host2: ahci
Jun 20 19:31:33.770794 kernel: scsi host3: ahci
Jun 20 19:31:33.771825 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 20 19:31:33.773507 kernel: scsi host4: ahci
Jun 20 19:31:33.775811 kernel: scsi host5: ahci
Jun 20 19:31:33.776026 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jun 20 19:31:33.776044 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jun 20 19:31:33.776054 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jun 20 19:31:33.776064 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jun 20 19:31:33.776074 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jun 20 19:31:33.776086 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jun 20 19:31:33.792869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 20 19:31:33.813682 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 20 19:31:33.817478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:31:33.828193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 20 19:31:33.845531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:31:33.847586 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:31:33.881308 disk-uuid[639]: Primary Header is updated.
Jun 20 19:31:33.881308 disk-uuid[639]: Secondary Entries is updated.
Jun 20 19:31:33.881308 disk-uuid[639]: Secondary Header is updated.
Jun 20 19:31:33.885805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:31:33.889795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:31:34.084765 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jun 20 19:31:34.084854 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 20 19:31:34.084865 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jun 20 19:31:34.084876 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 20 19:31:34.084886 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 20 19:31:34.085806 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 20 19:31:34.086812 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jun 20 19:31:34.088111 kernel: ata3.00: applying bridge limits
Jun 20 19:31:34.088123 kernel: ata3.00: configured for UDMA/100
Jun 20 19:31:34.088804 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jun 20 19:31:34.130811 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jun 20 19:31:34.131032 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 19:31:34.148817 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jun 20 19:31:34.518290 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:31:34.521003 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:31:34.523412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:31:34.525706 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:31:34.528701 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:31:34.549647 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:31:34.890575 disk-uuid[640]: The operation has completed successfully.
Jun 20 19:31:34.892039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:31:34.920882 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:31:34.921017 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:31:34.954072 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:31:34.976855 sh[669]: Success
Jun 20 19:31:34.994098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:31:34.994125 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:31:34.995173 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:31:35.003803 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jun 20 19:31:35.033011 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:31:35.036614 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:31:35.049860 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:31:35.055633 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:31:35.055659 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (681)
Jun 20 19:31:35.057791 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:31:35.057809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:31:35.057820 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:31:35.062227 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:31:35.063520 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:31:35.065051 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:31:35.065742 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:31:35.067519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:31:35.094835 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (714)
Jun 20 19:31:35.094876 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:31:35.097348 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:31:35.097374 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:31:35.104803 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:31:35.105769 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:31:35.108704 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:31:35.361170 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:31:35.456929 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:31:35.471007 ignition[761]: Ignition 2.21.0
Jun 20 19:31:35.471019 ignition[761]: Stage: fetch-offline
Jun 20 19:31:35.471051 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:31:35.471060 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:31:35.471146 ignition[761]: parsed url from cmdline: ""
Jun 20 19:31:35.471150 ignition[761]: no config URL provided
Jun 20 19:31:35.471155 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:31:35.471163 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:31:35.471184 ignition[761]: op(1): [started] loading QEMU firmware config module
Jun 20 19:31:35.471192 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jun 20 19:31:35.483969 ignition[761]: op(1): [finished] loading QEMU firmware config module
Jun 20 19:31:35.502602 systemd-networkd[856]: lo: Link UP
Jun 20 19:31:35.502613 systemd-networkd[856]: lo: Gained carrier
Jun 20 19:31:35.504164 systemd-networkd[856]: Enumeration completed
Jun 20 19:31:35.504253 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:31:35.504504 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:31:35.504508 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:31:35.505836 systemd-networkd[856]: eth0: Link UP
Jun 20 19:31:35.505839 systemd-networkd[856]: eth0: Gained carrier
Jun 20 19:31:35.505847 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:31:35.507306 systemd[1]: Reached target network.target - Network.
Jun 20 19:31:35.535824 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 19:31:35.537878 ignition[761]: parsing config with SHA512: 2e25e56f52619d3184a08bf7972df528e20c53335eecfe7d7b2b9d7c385b282623f0ae931ad9d5a3450cc5ff9c8bf5c5c1d7d5b08153fdb8aa14df23f99c3576
Jun 20 19:31:35.542753 unknown[761]: fetched base config from "system"
Jun 20 19:31:35.542766 unknown[761]: fetched user config from "qemu"
Jun 20 19:31:35.543136 ignition[761]: fetch-offline: fetch-offline passed
Jun 20 19:31:35.543188 ignition[761]: Ignition finished successfully
Jun 20 19:31:35.547655 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:31:35.550701 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jun 20 19:31:35.552829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:31:35.592804 ignition[864]: Ignition 2.21.0
Jun 20 19:31:35.592817 ignition[864]: Stage: kargs
Jun 20 19:31:35.592996 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:31:35.593007 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:31:35.595588 ignition[864]: kargs: kargs passed
Jun 20 19:31:35.597886 ignition[864]: Ignition finished successfully
Jun 20 19:31:35.602222 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:31:35.604327 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:31:35.638023 ignition[872]: Ignition 2.21.0
Jun 20 19:31:35.638036 ignition[872]: Stage: disks
Jun 20 19:31:35.638167 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:31:35.638178 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:31:35.640415 ignition[872]: disks: disks passed
Jun 20 19:31:35.640457 ignition[872]: Ignition finished successfully
Jun 20 19:31:35.644573 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:31:35.645861 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:31:35.647830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:31:35.650030 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:31:35.650241 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:31:35.650569 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:31:35.656509 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:31:35.692218 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jun 20 19:31:35.700358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:31:35.701358 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:31:35.814801 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:31:35.815447 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:31:35.817558 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:31:35.818686 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:31:35.821353 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:31:35.822609 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:31:35.822648 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:31:35.822669 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:31:35.834935 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:31:35.836168 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:31:35.841762 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (890)
Jun 20 19:31:35.841802 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:31:35.841814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:31:35.842788 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:31:35.847654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:31:35.875587 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:31:35.880683 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:31:35.885394 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:31:35.890165 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:31:35.977587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:31:35.979639 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:31:35.981422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:31:36.015797 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:31:36.031910 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:31:36.054799 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:31:36.267893 ignition[1003]: INFO : Ignition 2.21.0
Jun 20 19:31:36.267893 ignition[1003]: INFO : Stage: mount
Jun 20 19:31:36.271094 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:31:36.271094 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:31:36.271094 ignition[1003]: INFO : mount: mount passed
Jun 20 19:31:36.271094 ignition[1003]: INFO : Ignition finished successfully
Jun 20 19:31:36.271835 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:31:36.274186 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:31:36.297377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:31:36.339798 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1016)
Jun 20 19:31:36.342993 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:31:36.343018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:31:36.343029 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:31:36.347216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:31:36.383030 ignition[1033]: INFO : Ignition 2.21.0
Jun 20 19:31:36.383030 ignition[1033]: INFO : Stage: files
Jun 20 19:31:36.384905 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:31:36.384905 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:31:36.387287 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:31:36.387287 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:31:36.387287 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:31:36.391380 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:31:36.391380 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:31:36.391380 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:31:36.389909 unknown[1033]: wrote ssh authorized keys file for user: core
Jun 20 19:31:36.396730 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:31:36.396730 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 19:31:36.458203 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:31:36.984657 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:31:36.986928 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:31:37.001998 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:31:37.001998 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:31:37.001998 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:31:37.001998 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:31:37.001998 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:31:37.001998 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 19:31:37.307966 systemd-networkd[856]: eth0: Gained IPv6LL
Jun 20 19:31:37.721069 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 19:31:38.389146 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:31:38.389146 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 20 19:31:38.393203 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:31:38.396062 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:31:38.396062 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 20 19:31:38.396062 ignition[1033]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jun 20 19:31:38.400811 ignition[1033]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:31:38.400811 ignition[1033]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:31:38.400811 ignition[1033]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jun 20 19:31:38.400811 ignition[1033]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:31:38.416073 ignition[1033]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:31:38.419783 ignition[1033]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:31:38.421377 ignition[1033]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:31:38.421377 ignition[1033]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:31:38.421377 ignition[1033]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:31:38.421377 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:31:38.421377 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:31:38.421377 ignition[1033]: INFO : files: files passed
Jun 20 19:31:38.421377 ignition[1033]: INFO : Ignition finished successfully
Jun 20 19:31:38.424117 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:31:38.426502 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:31:38.428582 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:31:38.441985 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:31:38.442119 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:31:38.446953 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 20 19:31:38.451758 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:31:38.451758 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:31:38.454993 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:31:38.458309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:31:38.458548 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:31:38.463045 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:31:38.514719 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:31:38.514860 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:31:38.516045 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:31:38.518238 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:31:38.518611 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:31:38.523515 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:31:38.557249 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:31:38.559963 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:31:38.585635 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:31:38.586973 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:31:38.587272 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:31:38.587601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:31:38.587711 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:31:38.595231 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:31:38.596339 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:31:38.598213 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:31:38.600184 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:31:38.602409 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:31:38.604644 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:31:38.606850 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:31:38.607983 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:31:38.608323 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:31:38.608652 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:31:38.609156 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:31:38.609462 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:31:38.609564 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:31:38.618705 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:31:38.618844 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:31:38.619134 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:31:38.623940 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:31:38.626050 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:31:38.626152 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:31:38.627752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:31:38.627877 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:31:38.628333 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:31:38.628587 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:31:38.637859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:31:38.638003 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:31:38.641450 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:31:38.642378 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:31:38.642465 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:31:38.645048 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:31:38.645129 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:31:38.645977 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:31:38.646082 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:31:38.648970 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:31:38.649068 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:31:38.654093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:31:38.655803 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:31:38.655923 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:31:38.659908 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:31:38.660118 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:31:38.660216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:31:38.663043 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:31:38.663170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:31:38.672552 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:31:38.672679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:31:38.692725 ignition[1088]: INFO : Ignition 2.21.0
Jun 20 19:31:38.693921 ignition[1088]: INFO : Stage: umount
Jun 20 19:31:38.693921 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:31:38.693921 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:31:38.693179 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:31:38.698052 ignition[1088]: INFO : umount: umount passed
Jun 20 19:31:38.698052 ignition[1088]: INFO : Ignition finished successfully
Jun 20 19:31:38.697379 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:31:38.697499 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:31:38.702471 systemd[1]: Stopped target network.target - Network.
Jun 20 19:31:38.703385 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:31:38.703435 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:31:38.705243 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:31:38.705287 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:31:38.706178 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:31:38.706226 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:31:38.707998 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:31:38.708040 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:31:38.709883 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:31:38.711729 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:31:38.720272 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:31:38.720403 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:31:38.724934 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:31:38.725155 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:31:38.725264 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:31:38.729112 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:31:38.729755 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:31:38.732794 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:31:38.732883 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:31:38.736423 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:31:38.736493 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:31:38.736543 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:31:38.737057 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:31:38.737099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:31:38.739602 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:31:38.739649 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:31:38.741610 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:31:38.741655 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:31:38.745154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:31:38.751475 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:31:38.751538 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:31:38.763531 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:31:38.769032 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:31:38.770601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:31:38.770645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:31:38.772835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:31:38.772872 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:31:38.774880 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:31:38.774932 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:31:38.777207 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:31:38.777254 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:31:38.778022 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:31:38.778071 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:31:38.784858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:31:38.785184 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:31:38.785230 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:31:38.789664 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:31:38.789716 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:31:38.794267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:31:38.794311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:31:38.798645 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:31:38.798708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:31:38.798754 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:31:38.799081 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:31:38.803731 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:31:38.809481 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:31:38.809593 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:31:38.928208 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:31:38.928358 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:31:38.930441 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:31:38.931148 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:31:38.931199 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:31:38.936482 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:31:38.970178 systemd[1]: Switching root.
Jun 20 19:31:39.015937 systemd-journald[219]: Journal stopped
Jun 20 19:31:40.288421 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:31:40.288490 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:31:40.288504 kernel: SELinux: policy capability open_perms=1
Jun 20 19:31:40.288515 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:31:40.288526 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:31:40.288537 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:31:40.288558 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:31:40.288569 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:31:40.288584 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:31:40.288595 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:31:40.288606 kernel: audit: type=1403 audit(1750447899.506:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:31:40.288623 systemd[1]: Successfully loaded SELinux policy in 52.211ms.
Jun 20 19:31:40.288642 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.779ms.
Jun 20 19:31:40.288656 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:31:40.288668 systemd[1]: Detected virtualization kvm.
Jun 20 19:31:40.288680 systemd[1]: Detected architecture x86-64.
Jun 20 19:31:40.288691 systemd[1]: Detected first boot.
Jun 20 19:31:40.288705 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:31:40.288717 zram_generator::config[1135]: No configuration found.
Jun 20 19:31:40.288735 kernel: Guest personality initialized and is inactive
Jun 20 19:31:40.288747 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:31:40.288757 kernel: Initialized host personality
Jun 20 19:31:40.288794 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:31:40.288806 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:31:40.288818 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:31:40.288830 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:31:40.288844 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:31:40.288863 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:31:40.288879 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:31:40.288891 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:31:40.288903 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:31:40.288914 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:31:40.288926 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:31:40.288937 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:31:40.288951 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:31:40.288963 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:31:40.288974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:31:40.288986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:31:40.288998 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:31:40.289010 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:31:40.289022 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:31:40.289036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:31:40.289049 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:31:40.289060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:31:40.289072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:31:40.289084 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:31:40.289096 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:31:40.289108 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:31:40.289126 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:31:40.289138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:31:40.289150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:31:40.289164 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:31:40.289176 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:31:40.289187 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:31:40.289199 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:31:40.289211 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:31:40.289223 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:31:40.289235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:31:40.289246 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:31:40.289258 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:31:40.289271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:31:40.289283 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:31:40.289295 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:31:40.289307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:40.289319 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:31:40.289330 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:31:40.289342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:31:40.289354 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:31:40.289367 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:31:40.289386 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:31:40.289398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:31:40.289409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:31:40.289421 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:31:40.289433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:31:40.289444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:31:40.289456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:31:40.289467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:31:40.289481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:31:40.289493 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:31:40.289505 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:31:40.289516 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:31:40.289528 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:31:40.289539 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:31:40.289552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:31:40.289564 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:31:40.289583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:31:40.289595 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:31:40.289607 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:31:40.289619 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:31:40.289630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:31:40.289650 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:31:40.289662 systemd[1]: Stopped verity-setup.service.
Jun 20 19:31:40.289674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:40.289685 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:31:40.289718 systemd-journald[1206]: Collecting audit messages is disabled.
Jun 20 19:31:40.289742 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:31:40.289756 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:31:40.289768 systemd-journald[1206]: Journal started
Jun 20 19:31:40.289815 systemd-journald[1206]: Runtime Journal (/run/log/journal/55d83f635ffd4b178b0484a69e31556c) is 6M, max 48.6M, 42.5M free.
Jun 20 19:31:40.308011 kernel: loop: module loaded
Jun 20 19:31:40.308044 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:31:40.308061 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:31:40.308075 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:31:40.308089 kernel: fuse: init (API version 7.41)
Jun 20 19:31:40.048514 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:31:40.071719 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 20 19:31:40.072172 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:31:40.310486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:31:40.313429 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:31:40.315981 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:31:40.316292 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:31:40.317896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:31:40.318210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:31:40.319709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:31:40.319992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:31:40.321728 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:31:40.323411 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:31:40.323675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:31:40.325160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:31:40.325419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:31:40.327211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:31:40.328676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:31:40.330244 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:31:40.331831 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:31:40.336806 kernel: ACPI: bus type drm_connector registered
Jun 20 19:31:40.338467 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:31:40.338793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:31:40.351302 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:31:40.354104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:31:40.357859 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:31:40.359080 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:31:40.359110 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:31:40.361131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:31:40.365053 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:31:40.366299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:31:40.367642 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:31:40.370987 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:31:40.372295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:31:40.373360 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:31:40.374627 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:31:40.376966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:31:40.380078 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:31:40.388890 systemd-journald[1206]: Time spent on flushing to /var/log/journal/55d83f635ffd4b178b0484a69e31556c is 15.536ms for 975 entries.
Jun 20 19:31:40.388890 systemd-journald[1206]: System Journal (/var/log/journal/55d83f635ffd4b178b0484a69e31556c) is 8M, max 195.6M, 187.6M free.
Jun 20 19:31:40.411038 systemd-journald[1206]: Received client request to flush runtime journal.
Jun 20 19:31:40.385880 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:31:40.389700 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:31:40.391564 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:31:40.412013 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:31:40.413887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:31:40.415487 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:31:40.421150 kernel: loop0: detected capacity change from 0 to 146240
Jun 20 19:31:40.419085 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:31:40.424870 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:31:40.437124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:31:40.447804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:31:40.460140 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:31:40.462940 kernel: loop1: detected capacity change from 0 to 224512
Jun 20 19:31:40.465077 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:31:40.469312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:31:40.493165 kernel: loop2: detected capacity change from 0 to 113872
Jun 20 19:31:40.510616 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Jun 20 19:31:40.510636 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Jun 20 19:31:40.610471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:31:40.630808 kernel: loop3: detected capacity change from 0 to 146240
Jun 20 19:31:40.649813 kernel: loop4: detected capacity change from 0 to 224512
Jun 20 19:31:40.662816 kernel: loop5: detected capacity change from 0 to 113872
Jun 20 19:31:40.670517 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 20 19:31:40.671143 (sd-merge)[1279]: Merged extensions into '/usr'.
Jun 20 19:31:40.677253 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:31:40.677271 systemd[1]: Reloading...
Jun 20 19:31:40.755856 zram_generator::config[1301]: No configuration found.
Jun 20 19:31:40.935839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:31:40.965241 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:31:41.022489 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:31:41.022983 systemd[1]: Reloading finished in 345 ms.
Jun 20 19:31:41.052967 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:31:41.054715 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:31:41.071647 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:31:41.073709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:31:41.103906 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:31:41.103964 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:31:41.104257 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:31:41.104512 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:31:41.105242 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:31:41.105259 systemd[1]: Reloading...
Jun 20 19:31:41.105395 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:31:41.105662 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jun 20 19:31:41.105751 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jun 20 19:31:41.110179 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:31:41.110194 systemd-tmpfiles[1343]: Skipping /boot
Jun 20 19:31:41.211335 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:31:41.211353 systemd-tmpfiles[1343]: Skipping /boot
Jun 20 19:31:41.248874 zram_generator::config[1370]: No configuration found.
Jun 20 19:31:41.339000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:31:41.425540 systemd[1]: Reloading finished in 319 ms.
Jun 20 19:31:41.448253 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:31:41.450228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:31:41.468138 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:31:41.471013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:31:41.473567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:31:41.485852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:31:41.488855 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:31:41.494292 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:31:41.502183 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:41.502622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:31:41.505412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:31:41.509570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:31:41.512597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:31:41.514003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:31:41.514112 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:31:41.525736 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:31:41.527137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:41.528466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:31:41.535639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:31:41.538036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:31:41.540132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:31:41.540373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:31:41.541333 systemd-udevd[1413]: Using default interface naming scheme 'v255'.
Jun 20 19:31:41.542525 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:31:41.542764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:31:41.552674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:31:41.553693 augenrules[1442]: No rules
Jun 20 19:31:41.555788 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:31:41.556037 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:31:41.558915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:41.559194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:31:41.560587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:31:41.563120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:31:41.566205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:31:41.568892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:31:41.569002 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:31:41.573713 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:31:41.574872 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:41.576244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:31:41.578248 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:31:41.580417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:31:41.580909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:31:41.582557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:31:41.582763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:31:41.584534 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:31:41.585004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:31:41.590358 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:31:41.600047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:41.605934 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:31:41.607385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:31:41.610086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:31:41.613944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:31:41.619820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:31:41.628075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:31:41.629966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:31:41.630083 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:31:41.633068 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:31:41.634377 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:31:41.634483 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:31:41.636129 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:31:41.638307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:31:41.646122 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:31:41.649653 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:31:41.650396 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:31:41.652235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:31:41.652443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:31:41.654244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:31:41.654719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:31:41.661402 augenrules[1475]: /sbin/augenrules: No change
Jun 20 19:31:41.664849 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:31:41.674417 augenrules[1517]: No rules
Jun 20 19:31:41.675447 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:31:41.679135 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:31:41.683133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:31:41.683216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:31:41.687182 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 19:31:41.691240 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:31:41.821628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:31:41.825600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:31:41.848321 systemd-networkd[1488]: lo: Link UP
Jun 20 19:31:41.848333 systemd-networkd[1488]: lo: Gained carrier
Jun 20 19:31:41.850098 systemd-networkd[1488]: Enumeration completed
Jun 20 19:31:41.850199 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:31:41.850627 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:31:41.850640 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:31:41.851193 systemd-networkd[1488]: eth0: Link UP
Jun 20 19:31:41.851364 systemd-networkd[1488]: eth0: Gained carrier
Jun 20 19:31:41.851386 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:31:41.854349 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:31:41.856599 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:31:41.858793 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:31:41.859353 systemd-resolved[1412]: Positive Trust Anchors:
Jun 20 19:31:41.859372 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:31:41.859405 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:31:41.863100 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 19:31:41.865574 systemd-resolved[1412]: Defaulting to hostname 'linux'.
Jun 20 19:31:41.867364 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:31:41.868898 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:31:41.870259 systemd[1]: Reached target network.target - Network.
Jun 20 19:31:41.871213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:31:41.877859 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 20 19:31:41.882859 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:31:41.892441 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:31:41.893973 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 19:31:41.895003 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 20 19:31:41.895039 systemd-timesyncd[1525]: Initial clock synchronization to Fri 2025-06-20 19:31:41.648438 UTC.
Jun 20 19:31:41.895725 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:31:41.896976 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:31:41.898255 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:31:41.899768 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:31:41.900960 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:31:41.902327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:31:41.902361 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:31:41.903378 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:31:41.904615 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:31:41.906113 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:31:41.907435 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:31:41.912293 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jun 20 19:31:41.912524 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 19:31:41.910967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:31:41.914411 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:31:41.925065 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:31:41.928524 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:31:41.930061 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:31:41.940437 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:31:41.942073 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:31:41.944155 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:31:41.951753 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:31:41.952848 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:31:41.953855 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:31:41.953879 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:31:41.955052 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:31:41.957064 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:31:41.959590 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:31:41.962650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:31:41.968448 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:31:41.969543 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:31:42.068174 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:31:42.071493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:31:42.076837 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:31:42.078867 jq[1562]: false
Jun 20 19:31:42.083194 extend-filesystems[1563]: Found /dev/vda6
Jun 20 19:31:42.085950 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:31:42.089207 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:31:42.092353 extend-filesystems[1563]: Found /dev/vda9
Jun 20 19:31:42.093135 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing passwd entry cache
Jun 20 19:31:42.092507 oslogin_cache_refresh[1564]: Refreshing passwd entry cache
Jun 20 19:31:42.095293 extend-filesystems[1563]: Checking size of /dev/vda9
Jun 20 19:31:42.102640 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:31:42.103760 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting users, quitting
Jun 20 19:31:42.103760 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:31:42.103760 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing group entry cache
Jun 20 19:31:42.102924 oslogin_cache_refresh[1564]: Failure getting users, quitting
Jun 20 19:31:42.102943 oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:31:42.102997 oslogin_cache_refresh[1564]: Refreshing group entry cache
Jun 20 19:31:42.105654 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:31:42.106273 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:31:42.111929 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting groups, quitting
Jun 20 19:31:42.111929 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:31:42.111912 oslogin_cache_refresh[1564]: Failure getting groups, quitting
Jun 20 19:31:42.111925 oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:31:42.113221 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:31:42.117929 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:31:42.119433 extend-filesystems[1563]: Resized partition /dev/vda9
Jun 20 19:31:42.129483 extend-filesystems[1590]: resize2fs 1.47.2 (1-Jan-2025)
Jun 20 19:31:42.141612 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 20 19:31:42.141665 update_engine[1581]: I20250620 19:31:42.141476 1581 main.cc:92] Flatcar Update Engine starting
Jun 20 19:31:42.136456 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:31:42.138152 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:31:42.138394 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:31:42.138706 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:31:42.138949 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:31:42.140305 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:31:42.140550 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:31:42.143120 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:31:42.143380 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:31:42.145047 jq[1586]: true
Jun 20 19:31:42.162786 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 20 19:31:42.172421 jq[1592]: true
Jun 20 19:31:42.197879 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:31:42.219529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:31:42.221900 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 19:31:42.223873 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 20 19:31:42.223873 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 20 19:31:42.223873 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 20 19:31:42.229426 extend-filesystems[1563]: Resized filesystem in /dev/vda9
Jun 20 19:31:42.228538 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:31:42.230833 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:31:42.237247 kernel: kvm_amd: TSC scaling supported
Jun 20 19:31:42.237284 kernel: kvm_amd: Nested Virtualization enabled
Jun 20 19:31:42.237322 kernel: kvm_amd: Nested Paging enabled
Jun 20 19:31:42.237338 kernel: kvm_amd: LBR virtualization supported
Jun 20 19:31:42.238986 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jun 20 19:31:42.239079 kernel: kvm_amd: Virtual GIF supported
Jun 20 19:31:42.251815 tar[1591]: linux-amd64/LICENSE
Jun 20 19:31:42.251815 tar[1591]: linux-amd64/helm
Jun 20 19:31:42.252235 bash[1622]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:31:42.256132 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:31:42.262041 dbus-daemon[1560]: [system] SELinux support is enabled
Jun 20 19:31:42.262339 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 19:31:42.262417 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:31:42.268157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:31:42.268202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:31:42.269560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:31:42.269580 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:31:42.271165 update_engine[1581]: I20250620 19:31:42.269727 1581 update_check_scheduler.cc:74] Next update check in 9m53s
Jun 20 19:31:42.273264 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:31:42.282585 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:31:42.292039 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button)
Jun 20 19:31:42.292808 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:31:42.293569 systemd-logind[1580]: New seat seat0.
Jun 20 19:31:42.298448 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:31:42.312046 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 19:31:42.319284 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 19:31:42.390095 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:31:42.391118 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:31:42.395598 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:31:42.406099 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:31:42.420817 kernel: EDAC MC: Ver: 3.0.0
Jun 20 19:31:42.504460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:31:42.521619 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:31:42.526122 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:31:42.529641 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 19:31:42.531095 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:31:42.669364 containerd[1593]: time="2025-06-20T19:31:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:31:42.672917 containerd[1593]: time="2025-06-20T19:31:42.672876426Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:31:42.680596 containerd[1593]: time="2025-06-20T19:31:42.680352538Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.875µs"
Jun 20 19:31:42.680596 containerd[1593]: time="2025-06-20T19:31:42.680401583Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:31:42.680596 containerd[1593]: time="2025-06-20T19:31:42.680426363Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:31:42.680767 containerd[1593]: time="2025-06-20T19:31:42.680631761Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:31:42.680767 containerd[1593]: time="2025-06-20T19:31:42.680646307Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:31:42.680767 containerd[1593]: time="2025-06-20T19:31:42.680673330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:31:42.680767 containerd[1593]: time="2025-06-20T19:31:42.680753797Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:31:42.680767 containerd[1593]: time="2025-06-20T19:31:42.680764265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681056790Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681076037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681085941Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681093165Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681204899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681432134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681458710Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681467080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:31:42.681583 containerd[1593]: time="2025-06-20T19:31:42.681498328Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:31:42.683009 containerd[1593]: time="2025-06-20T19:31:42.682822763Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:31:42.683074 containerd[1593]: time="2025-06-20T19:31:42.683052436Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:31:42.689701 containerd[1593]: time="2025-06-20T19:31:42.689665943Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:31:42.689767 containerd[1593]: time="2025-06-20T19:31:42.689728583Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:31:42.689767 containerd[1593]: time="2025-06-20T19:31:42.689754441Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:31:42.689767 containerd[1593]: time="2025-06-20T19:31:42.689766569Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:31:42.689865 containerd[1593]: time="2025-06-20T19:31:42.689814925Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:31:42.689865 containerd[1593]: time="2025-06-20T19:31:42.689826286Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:31:42.689865 containerd[1593]: time="2025-06-20T19:31:42.689837064Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:31:42.689865 containerd[1593]: time="2025-06-20T19:31:42.689848425Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:31:42.689865 containerd[1593]: time="2025-06-20T19:31:42.689858922Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:31:42.689986 containerd[1593]: time="2025-06-20T19:31:42.689868641Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:31:42.689986 containerd[1593]: time="2025-06-20T19:31:42.689877041Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:31:42.689986 containerd[1593]: time="2025-06-20T19:31:42.689888071Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690044743Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690074369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690088362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690100344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690109956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690119589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690130464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690144330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690168013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690189268Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690200755Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690279339Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690298545Z" level=info msg="Start snapshots syncer"
Jun 20 19:31:42.690586 containerd[1593]: time="2025-06-20T19:31:42.690327190Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:31:42.691123 containerd[1593]: time="2025-06-20T19:31:42.690634962Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:31:42.691123 containerd[1593]: time="2025-06-20T19:31:42.690690464Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690770834Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690902551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690919768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690930507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690939751Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690949665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690958394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690968143Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.690990855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.691000769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.691029851Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.691050310Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.691061622Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:31:42.691370 containerd[1593]: time="2025-06-20T19:31:42.691069361Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691078887Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691086237Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691104308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691114134Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691145779Z" level=info msg="runtime interface created"
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691152479Z" level=info msg="created NRI interface"
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691169336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691179891Z" level=info msg="Connect containerd service"
Jun 20 19:31:42.691969 containerd[1593]: time="2025-06-20T19:31:42.691210497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:31:42.692136
containerd[1593]: time="2025-06-20T19:31:42.692004413Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:31:43.046246 containerd[1593]: time="2025-06-20T19:31:43.046134913Z" level=info msg="Start subscribing containerd event" Jun 20 19:31:43.046373 containerd[1593]: time="2025-06-20T19:31:43.046228140Z" level=info msg="Start recovering state" Jun 20 19:31:43.046423 containerd[1593]: time="2025-06-20T19:31:43.046406394Z" level=info msg="Start event monitor" Jun 20 19:31:43.046446 containerd[1593]: time="2025-06-20T19:31:43.046431925Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:31:43.046446 containerd[1593]: time="2025-06-20T19:31:43.046442376Z" level=info msg="Start streaming server" Jun 20 19:31:43.046486 containerd[1593]: time="2025-06-20T19:31:43.046459018Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:31:43.046486 containerd[1593]: time="2025-06-20T19:31:43.046471144Z" level=info msg="runtime interface starting up..." Jun 20 19:31:43.046486 containerd[1593]: time="2025-06-20T19:31:43.046478797Z" level=info msg="starting plugins..." Jun 20 19:31:43.046537 containerd[1593]: time="2025-06-20T19:31:43.046497408Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:31:43.047220 containerd[1593]: time="2025-06-20T19:31:43.047196406Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:31:43.047295 containerd[1593]: time="2025-06-20T19:31:43.047276414Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:31:43.049065 systemd[1]: Started containerd.service - containerd container runtime. 
Jun 20 19:31:43.050218 containerd[1593]: time="2025-06-20T19:31:43.049927613Z" level=info msg="containerd successfully booted in 0.381882s" Jun 20 19:31:43.076583 tar[1591]: linux-amd64/README.md Jun 20 19:31:43.099648 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:31:43.131917 systemd-networkd[1488]: eth0: Gained IPv6LL Jun 20 19:31:43.134726 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:31:43.136468 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:31:43.139223 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 20 19:31:43.141610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:43.143643 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:31:43.165511 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:31:43.167118 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 20 19:31:43.167380 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 20 19:31:43.169522 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:31:44.595724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:44.597272 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:31:44.598689 systemd[1]: Startup finished in 2.805s (kernel) + 6.889s (initrd) + 5.143s (userspace) = 14.837s. 
Jun 20 19:31:44.631084 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:31:45.014418 kubelet[1702]: E0620 19:31:45.014267 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:31:45.018714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:31:45.018929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:31:45.019296 systemd[1]: kubelet.service: Consumed 1.737s CPU time, 264.4M memory peak. Jun 20 19:31:45.514130 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:31:45.515403 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:33900.service - OpenSSH per-connection server daemon (10.0.0.1:33900). Jun 20 19:31:45.580164 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 33900 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:45.582187 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:45.588583 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:31:45.589697 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:31:45.595842 systemd-logind[1580]: New session 1 of user core. Jun 20 19:31:45.612231 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:31:45.614952 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 20 19:31:45.637085 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:31:45.639239 systemd-logind[1580]: New session c1 of user core. Jun 20 19:31:45.789218 systemd[1719]: Queued start job for default target default.target. Jun 20 19:31:45.800957 systemd[1719]: Created slice app.slice - User Application Slice. Jun 20 19:31:45.800981 systemd[1719]: Reached target paths.target - Paths. Jun 20 19:31:45.801019 systemd[1719]: Reached target timers.target - Timers. Jun 20 19:31:45.802518 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:31:45.813035 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:31:45.813153 systemd[1719]: Reached target sockets.target - Sockets. Jun 20 19:31:45.813193 systemd[1719]: Reached target basic.target - Basic System. Jun 20 19:31:45.813234 systemd[1719]: Reached target default.target - Main User Target. Jun 20 19:31:45.813263 systemd[1719]: Startup finished in 167ms. Jun 20 19:31:45.813599 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:31:45.815173 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:31:45.878625 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902). Jun 20 19:31:45.934981 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:45.936289 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:45.940570 systemd-logind[1580]: New session 2 of user core. Jun 20 19:31:45.949890 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 19:31:46.000629 sshd[1732]: Connection closed by 10.0.0.1 port 33902 Jun 20 19:31:46.000948 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:46.011218 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:33902.service: Deactivated successfully. Jun 20 19:31:46.012762 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:31:46.013889 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:31:46.016578 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912). Jun 20 19:31:46.017109 systemd-logind[1580]: Removed session 2. Jun 20 19:31:46.071795 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:46.073060 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:46.076987 systemd-logind[1580]: New session 3 of user core. Jun 20 19:31:46.087887 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:31:46.135703 sshd[1740]: Connection closed by 10.0.0.1 port 33912 Jun 20 19:31:46.136060 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:46.154285 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:33912.service: Deactivated successfully. Jun 20 19:31:46.156061 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:31:46.156753 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:31:46.159574 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914). Jun 20 19:31:46.160112 systemd-logind[1580]: Removed session 3. 
Jun 20 19:31:46.217898 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:46.219200 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:46.223148 systemd-logind[1580]: New session 4 of user core. Jun 20 19:31:46.232908 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:31:46.283651 sshd[1748]: Connection closed by 10.0.0.1 port 33914 Jun 20 19:31:46.283970 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:46.295213 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:33914.service: Deactivated successfully. Jun 20 19:31:46.296789 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:31:46.297504 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:31:46.300100 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:33926.service - OpenSSH per-connection server daemon (10.0.0.1:33926). Jun 20 19:31:46.300647 systemd-logind[1580]: Removed session 4. Jun 20 19:31:46.347230 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 33926 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:46.348854 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:46.352946 systemd-logind[1580]: New session 5 of user core. Jun 20 19:31:46.367916 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jun 20 19:31:46.423970 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:31:46.424278 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:46.441056 sudo[1757]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:46.442742 sshd[1756]: Connection closed by 10.0.0.1 port 33926 Jun 20 19:31:46.443076 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:46.461357 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:33926.service: Deactivated successfully. Jun 20 19:31:46.463147 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:31:46.463841 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:31:46.466732 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:33932.service - OpenSSH per-connection server daemon (10.0.0.1:33932). Jun 20 19:31:46.467291 systemd-logind[1580]: Removed session 5. Jun 20 19:31:46.532311 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 33932 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:46.533790 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:46.537997 systemd-logind[1580]: New session 6 of user core. Jun 20 19:31:46.548908 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 20 19:31:46.600746 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:31:46.601063 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:46.607365 sudo[1767]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:46.613617 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:31:46.613955 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:46.623703 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:31:46.666278 augenrules[1789]: No rules Jun 20 19:31:46.667935 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:31:46.668205 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:31:46.669280 sudo[1766]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:46.670748 sshd[1765]: Connection closed by 10.0.0.1 port 33932 Jun 20 19:31:46.671042 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:46.683045 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:33932.service: Deactivated successfully. Jun 20 19:31:46.684511 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:31:46.685231 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:31:46.687540 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942). Jun 20 19:31:46.688072 systemd-logind[1580]: Removed session 6. Jun 20 19:31:46.739216 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:31:46.740580 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:46.744517 systemd-logind[1580]: New session 7 of user core. 
Jun 20 19:31:46.753894 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:31:46.804520 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:31:46.804841 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:47.097093 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:31:47.119074 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:31:47.323053 dockerd[1821]: time="2025-06-20T19:31:47.322988472Z" level=info msg="Starting up" Jun 20 19:31:47.323784 dockerd[1821]: time="2025-06-20T19:31:47.323738664Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:31:48.417404 dockerd[1821]: time="2025-06-20T19:31:48.417353324Z" level=info msg="Loading containers: start." Jun 20 19:31:48.427830 kernel: Initializing XFRM netlink socket Jun 20 19:31:48.679504 systemd-networkd[1488]: docker0: Link UP Jun 20 19:31:48.684557 dockerd[1821]: time="2025-06-20T19:31:48.684509938Z" level=info msg="Loading containers: done." Jun 20 19:31:48.697942 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3343981287-merged.mount: Deactivated successfully. 
Jun 20 19:31:48.699396 dockerd[1821]: time="2025-06-20T19:31:48.699340985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:31:48.699481 dockerd[1821]: time="2025-06-20T19:31:48.699441725Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:31:48.699600 dockerd[1821]: time="2025-06-20T19:31:48.699575276Z" level=info msg="Initializing buildkit" Jun 20 19:31:48.727657 dockerd[1821]: time="2025-06-20T19:31:48.727596984Z" level=info msg="Completed buildkit initialization" Jun 20 19:31:48.734452 dockerd[1821]: time="2025-06-20T19:31:48.734410083Z" level=info msg="Daemon has completed initialization" Jun 20 19:31:48.734566 dockerd[1821]: time="2025-06-20T19:31:48.734509706Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:31:48.734628 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:31:49.389761 containerd[1593]: time="2025-06-20T19:31:49.389717464Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:31:49.964435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402743597.mount: Deactivated successfully. 
Jun 20 19:31:51.734276 containerd[1593]: time="2025-06-20T19:31:51.734213750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:51.734876 containerd[1593]: time="2025-06-20T19:31:51.734802133Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 20 19:31:51.735965 containerd[1593]: time="2025-06-20T19:31:51.735919034Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:51.738224 containerd[1593]: time="2025-06-20T19:31:51.738163366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:51.739019 containerd[1593]: time="2025-06-20T19:31:51.738987183Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.349230539s" Jun 20 19:31:51.739076 containerd[1593]: time="2025-06-20T19:31:51.739020643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:31:51.739596 containerd[1593]: time="2025-06-20T19:31:51.739556447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:31:53.458879 containerd[1593]: time="2025-06-20T19:31:53.458818655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:53.459631 containerd[1593]: time="2025-06-20T19:31:53.459602516Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 20 19:31:53.460850 containerd[1593]: time="2025-06-20T19:31:53.460806091Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:53.463270 containerd[1593]: time="2025-06-20T19:31:53.463218366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:53.464125 containerd[1593]: time="2025-06-20T19:31:53.464092046Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.724499032s" Jun 20 19:31:53.464173 containerd[1593]: time="2025-06-20T19:31:53.464127728Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:31:53.464805 containerd[1593]: time="2025-06-20T19:31:53.464619444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:31:54.766916 containerd[1593]: time="2025-06-20T19:31:54.766843764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:54.767937 containerd[1593]: time="2025-06-20T19:31:54.767874093Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 20 19:31:54.769210 containerd[1593]: time="2025-06-20T19:31:54.769178669Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:54.771584 containerd[1593]: time="2025-06-20T19:31:54.771549229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:54.772444 containerd[1593]: time="2025-06-20T19:31:54.772413582Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.307768516s" Jun 20 19:31:54.772444 containerd[1593]: time="2025-06-20T19:31:54.772443442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:31:54.773087 containerd[1593]: time="2025-06-20T19:31:54.772921210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:31:55.269346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:31:55.270898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:56.379816 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1590942974 wd_nsec: 1590942884 Jun 20 19:31:56.798924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:31:56.802869 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:31:56.970045 kubelet[2104]: E0620 19:31:56.969959 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:31:56.977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:31:56.977191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:31:56.977554 systemd[1]: kubelet.service: Consumed 1.362s CPU time, 110.6M memory peak. Jun 20 19:31:57.314922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261217321.mount: Deactivated successfully. Jun 20 19:31:58.032304 containerd[1593]: time="2025-06-20T19:31:58.032238611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:58.033030 containerd[1593]: time="2025-06-20T19:31:58.032974988Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 20 19:31:58.034030 containerd[1593]: time="2025-06-20T19:31:58.033997260Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:58.036012 containerd[1593]: time="2025-06-20T19:31:58.035962683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:58.036409 containerd[1593]: time="2025-06-20T19:31:58.036357914Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 3.263396884s" Jun 20 19:31:58.036437 containerd[1593]: time="2025-06-20T19:31:58.036422211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:31:58.037110 containerd[1593]: time="2025-06-20T19:31:58.037064874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:31:58.522004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882586880.mount: Deactivated successfully. Jun 20 19:31:59.180214 containerd[1593]: time="2025-06-20T19:31:59.180147562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:59.180926 containerd[1593]: time="2025-06-20T19:31:59.180845987Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 19:31:59.181999 containerd[1593]: time="2025-06-20T19:31:59.181958707Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:59.184490 containerd[1593]: time="2025-06-20T19:31:59.184438095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:59.185311 containerd[1593]: time="2025-06-20T19:31:59.185263842Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.148165151s" Jun 20 19:31:59.185311 containerd[1593]: time="2025-06-20T19:31:59.185294592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:31:59.185811 containerd[1593]: time="2025-06-20T19:31:59.185757184Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:31:59.702308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445054833.mount: Deactivated successfully. Jun 20 19:31:59.707323 containerd[1593]: time="2025-06-20T19:31:59.707207303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:31:59.708021 containerd[1593]: time="2025-06-20T19:31:59.707969414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:31:59.709808 containerd[1593]: time="2025-06-20T19:31:59.709338358Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:31:59.711880 containerd[1593]: time="2025-06-20T19:31:59.711839986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:31:59.712516 containerd[1593]: time="2025-06-20T19:31:59.712475942Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.514796ms" Jun 20 19:31:59.712557 containerd[1593]: time="2025-06-20T19:31:59.712524898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:31:59.713257 containerd[1593]: time="2025-06-20T19:31:59.713227156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:32:00.853511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434873443.mount: Deactivated successfully. Jun 20 19:32:04.353220 containerd[1593]: time="2025-06-20T19:32:04.353155557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:04.353951 containerd[1593]: time="2025-06-20T19:32:04.353887654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jun 20 19:32:04.355426 containerd[1593]: time="2025-06-20T19:32:04.355390100Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:04.358143 containerd[1593]: time="2025-06-20T19:32:04.358112463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:04.359089 containerd[1593]: time="2025-06-20T19:32:04.359045120Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.645778993s" Jun 20 19:32:04.359089 containerd[1593]: time="2025-06-20T19:32:04.359082650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:32:06.809029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:32:06.809198 systemd[1]: kubelet.service: Consumed 1.362s CPU time, 110.6M memory peak. Jun 20 19:32:06.811736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:32:06.835562 systemd[1]: Reload requested from client PID 2259 ('systemctl') (unit session-7.scope)... Jun 20 19:32:06.835578 systemd[1]: Reloading... Jun 20 19:32:06.904843 zram_generator::config[2301]: No configuration found. Jun 20 19:32:07.112714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:32:07.226312 systemd[1]: Reloading finished in 390 ms. Jun 20 19:32:07.293467 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:32:07.293563 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:32:07.293872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:32:07.293914 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.2M memory peak. Jun 20 19:32:07.295364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:32:07.469469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:32:07.484065 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:32:07.537342 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:32:07.537342 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:32:07.537342 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:32:07.537934 kubelet[2349]: I0620 19:32:07.537382 2349 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:32:07.710618 kubelet[2349]: I0620 19:32:07.710579 2349 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:32:07.710618 kubelet[2349]: I0620 19:32:07.710606 2349 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:32:07.710900 kubelet[2349]: I0620 19:32:07.710885 2349 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:32:07.738670 kubelet[2349]: E0620 19:32:07.738588 2349 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:07.739136 kubelet[2349]: I0620 19:32:07.738859 
2349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:32:07.745841 kubelet[2349]: I0620 19:32:07.745818 2349 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:32:07.750686 kubelet[2349]: I0620 19:32:07.750643 2349 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:32:07.750955 kubelet[2349]: I0620 19:32:07.750917 2349 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:32:07.751113 kubelet[2349]: I0620 19:32:07.750944 2349 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:32:07.751537 kubelet[2349]: I0620 19:32:07.751510 2349 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:32:07.751537 kubelet[2349]: I0620 19:32:07.751527 2349 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:32:07.751675 kubelet[2349]: I0620 19:32:07.751652 2349 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:32:07.755258 kubelet[2349]: I0620 19:32:07.755232 2349 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:32:07.755258 kubelet[2349]: I0620 19:32:07.755256 2349 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:32:07.755314 kubelet[2349]: I0620 19:32:07.755276 2349 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:32:07.755314 kubelet[2349]: I0620 19:32:07.755286 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:32:07.759228 kubelet[2349]: W0620 19:32:07.759183 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 20 19:32:07.759291 kubelet[2349]: E0620 19:32:07.759239 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:07.759321 kubelet[2349]: W0620 19:32:07.759302 2349 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 20 19:32:07.759346 kubelet[2349]: E0620 19:32:07.759327 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:07.761148 kubelet[2349]: I0620 19:32:07.760511 2349 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:32:07.761148 kubelet[2349]: I0620 19:32:07.760913 2349 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:32:07.761148 kubelet[2349]: W0620 19:32:07.760964 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 20 19:32:07.763582 kubelet[2349]: I0620 19:32:07.763553 2349 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:32:07.763709 kubelet[2349]: I0620 19:32:07.763592 2349 server.go:1287] "Started kubelet" Jun 20 19:32:08.020979 kubelet[2349]: I0620 19:32:08.020818 2349 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:32:08.025184 kubelet[2349]: I0620 19:32:08.024600 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:32:08.025184 kubelet[2349]: I0620 19:32:08.025116 2349 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:32:08.025939 kubelet[2349]: I0620 19:32:08.025649 2349 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:32:08.026067 kubelet[2349]: I0620 19:32:08.026046 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:32:08.026321 kubelet[2349]: I0620 19:32:08.026296 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:32:08.029564 kubelet[2349]: E0620 19:32:08.029538 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.029603 kubelet[2349]: I0620 19:32:08.029569 2349 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:32:08.029753 kubelet[2349]: I0620 19:32:08.029731 2349 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:32:08.029831 kubelet[2349]: I0620 19:32:08.029813 2349 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:32:08.030189 kubelet[2349]: W0620 19:32:08.030144 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: 
connect: connection refused Jun 20 19:32:08.030239 kubelet[2349]: E0620 19:32:08.030196 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:08.031084 kubelet[2349]: E0620 19:32:08.028759 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad71db42a9eec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:32:07.763574508 +0000 UTC m=+0.273339573,LastTimestamp:2025-06-20 19:32:07.763574508 +0000 UTC m=+0.273339573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:32:08.031084 kubelet[2349]: E0620 19:32:08.030922 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Jun 20 19:32:08.031084 kubelet[2349]: E0620 19:32:08.031036 2349 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:32:08.031304 kubelet[2349]: I0620 19:32:08.031279 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:32:08.032307 kubelet[2349]: I0620 19:32:08.032245 2349 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:32:08.032307 kubelet[2349]: I0620 19:32:08.032302 2349 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:32:08.043003 kubelet[2349]: I0620 19:32:08.042931 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:32:08.044127 kubelet[2349]: I0620 19:32:08.044103 2349 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:32:08.044127 kubelet[2349]: I0620 19:32:08.044108 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:32:08.044199 kubelet[2349]: I0620 19:32:08.044147 2349 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:32:08.044199 kubelet[2349]: I0620 19:32:08.044171 2349 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:32:08.044199 kubelet[2349]: I0620 19:32:08.044183 2349 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:32:08.044267 kubelet[2349]: E0620 19:32:08.044230 2349 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:32:08.044374 kubelet[2349]: I0620 19:32:08.044117 2349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:32:08.044398 kubelet[2349]: I0620 19:32:08.044376 2349 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:32:08.046908 kubelet[2349]: W0620 19:32:08.046812 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 20 19:32:08.046908 kubelet[2349]: E0620 19:32:08.046862 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:08.130132 kubelet[2349]: E0620 19:32:08.130109 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.144552 kubelet[2349]: E0620 19:32:08.144514 2349 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:32:08.230659 kubelet[2349]: E0620 19:32:08.230630 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.232227 kubelet[2349]: E0620 19:32:08.232178 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms" Jun 20 19:32:08.330822 kubelet[2349]: E0620 19:32:08.330687 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.344916 kubelet[2349]: E0620 19:32:08.344862 2349 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:32:08.431244 kubelet[2349]: E0620 19:32:08.431194 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.532031 kubelet[2349]: E0620 19:32:08.531975 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.632497 kubelet[2349]: E0620 19:32:08.632348 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.632932 kubelet[2349]: E0620 19:32:08.632689 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Jun 20 19:32:08.635079 kubelet[2349]: W0620 19:32:08.635048 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 20 19:32:08.635122 kubelet[2349]: E0620 19:32:08.635084 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:08.642599 kubelet[2349]: W0620 19:32:08.642572 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 20 19:32:08.642627 kubelet[2349]: E0620 19:32:08.642595 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:08.733286 kubelet[2349]: E0620 19:32:08.733230 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.745625 kubelet[2349]: E0620 19:32:08.745569 2349 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:32:08.833671 kubelet[2349]: E0620 19:32:08.833615 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.934115 kubelet[2349]: E0620 19:32:08.934026 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:08.988565 kubelet[2349]: E0620 19:32:08.988457 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad71db42a9eec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:32:07.763574508 +0000 UTC m=+0.273339573,LastTimestamp:2025-06-20 19:32:07.763574508 +0000 UTC m=+0.273339573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:32:09.034224 kubelet[2349]: E0620 19:32:09.034193 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:09.100080 kubelet[2349]: I0620 19:32:09.100037 2349 policy_none.go:49] "None policy: Start" Jun 20 19:32:09.100080 kubelet[2349]: I0620 19:32:09.100072 2349 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:32:09.100162 kubelet[2349]: I0620 19:32:09.100089 2349 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:32:09.107304 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:32:09.130244 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:32:09.134998 kubelet[2349]: E0620 19:32:09.134970 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:09.154731 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 19:32:09.156167 kubelet[2349]: I0620 19:32:09.156143 2349 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:32:09.156439 kubelet[2349]: I0620 19:32:09.156356 2349 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:32:09.156439 kubelet[2349]: I0620 19:32:09.156372 2349 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:32:09.156647 kubelet[2349]: I0620 19:32:09.156630 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:32:09.157427 kubelet[2349]: E0620 19:32:09.157410 2349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:32:09.157474 kubelet[2349]: E0620 19:32:09.157449 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 20 19:32:09.205985 kubelet[2349]: W0620 19:32:09.205890 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jun 20 19:32:09.205985 kubelet[2349]: E0620 19:32:09.205937 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:09.210434 kubelet[2349]: W0620 19:32:09.210410 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.149:6443: connect: connection refused Jun 20 19:32:09.210492 kubelet[2349]: E0620 19:32:09.210438 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:09.258353 kubelet[2349]: I0620 19:32:09.258310 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:32:09.258893 kubelet[2349]: E0620 19:32:09.258846 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 20 19:32:09.433451 kubelet[2349]: E0620 19:32:09.433396 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Jun 20 19:32:09.460607 kubelet[2349]: I0620 19:32:09.460540 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:32:09.460967 kubelet[2349]: E0620 19:32:09.460918 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 20 19:32:09.556536 systemd[1]: Created slice kubepods-burstable-pod84f612b8557459546619150f2293a6b3.slice - libcontainer container kubepods-burstable-pod84f612b8557459546619150f2293a6b3.slice. 
Jun 20 19:32:09.577246 kubelet[2349]: E0620 19:32:09.577204 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:32:09.579977 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jun 20 19:32:09.595005 kubelet[2349]: E0620 19:32:09.594976 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:32:09.597631 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jun 20 19:32:09.599420 kubelet[2349]: E0620 19:32:09.599385 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:32:09.638645 kubelet[2349]: I0620 19:32:09.638613 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:09.638645 kubelet[2349]: I0620 19:32:09.638640 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84f612b8557459546619150f2293a6b3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f612b8557459546619150f2293a6b3\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:09.638986 kubelet[2349]: I0620 19:32:09.638662 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:09.638986 kubelet[2349]: I0620 19:32:09.638720 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:09.638986 kubelet[2349]: I0620 19:32:09.638788 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:09.638986 kubelet[2349]: I0620 19:32:09.638811 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84f612b8557459546619150f2293a6b3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f612b8557459546619150f2293a6b3\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:09.638986 kubelet[2349]: I0620 19:32:09.638827 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84f612b8557459546619150f2293a6b3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84f612b8557459546619150f2293a6b3\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:09.639115 kubelet[2349]: I0620 19:32:09.638842 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:09.639115 kubelet[2349]: I0620 19:32:09.638858 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:09.818896 kubelet[2349]: E0620 19:32:09.818749 2349 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:32:09.862706 kubelet[2349]: I0620 19:32:09.862678 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:32:09.863086 kubelet[2349]: E0620 19:32:09.863045 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jun 20 19:32:09.879001 containerd[1593]: time="2025-06-20T19:32:09.878939979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84f612b8557459546619150f2293a6b3,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:09.896879 containerd[1593]: time="2025-06-20T19:32:09.896819078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:09.902265 
containerd[1593]: time="2025-06-20T19:32:09.902223064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:09.942436 containerd[1593]: time="2025-06-20T19:32:09.942374958Z" level=info msg="connecting to shim 3572519af703e258bbe3f1b435693355b5ce0efe2dc8d2225df574f24c705fce" address="unix:///run/containerd/s/db4ed081d520aa0e6d5195db614ee6567f166b1d9ebc9fad88f8646574cee1f3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:09.958193 containerd[1593]: time="2025-06-20T19:32:09.958047564Z" level=info msg="connecting to shim 372dac438dc7cdea488258a43735a094d59bfcc89f114ce93e2e80626c0b1479" address="unix:///run/containerd/s/e0d2cb9f69596792ecac1e583fcf8f6e4561ccc1e514879378e99cbfad031454" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:09.963378 containerd[1593]: time="2025-06-20T19:32:09.963318020Z" level=info msg="connecting to shim f0daf0ee2787d37ece1481863541d614a737c20a9de562f81d5ca065f27fb9ba" address="unix:///run/containerd/s/67a009f65b35441f9e98e6b702aa8b73fed4a978c5e0e12bcc19268e91191e55" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:09.984894 systemd[1]: Started cri-containerd-3572519af703e258bbe3f1b435693355b5ce0efe2dc8d2225df574f24c705fce.scope - libcontainer container 3572519af703e258bbe3f1b435693355b5ce0efe2dc8d2225df574f24c705fce. Jun 20 19:32:09.988044 systemd[1]: Started cri-containerd-372dac438dc7cdea488258a43735a094d59bfcc89f114ce93e2e80626c0b1479.scope - libcontainer container 372dac438dc7cdea488258a43735a094d59bfcc89f114ce93e2e80626c0b1479. Jun 20 19:32:09.993238 systemd[1]: Started cri-containerd-f0daf0ee2787d37ece1481863541d614a737c20a9de562f81d5ca065f27fb9ba.scope - libcontainer container f0daf0ee2787d37ece1481863541d614a737c20a9de562f81d5ca065f27fb9ba. 
Jun 20 19:32:10.065900 containerd[1593]: time="2025-06-20T19:32:10.065832937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"372dac438dc7cdea488258a43735a094d59bfcc89f114ce93e2e80626c0b1479\"" Jun 20 19:32:10.073329 containerd[1593]: time="2025-06-20T19:32:10.073239648Z" level=info msg="CreateContainer within sandbox \"372dac438dc7cdea488258a43735a094d59bfcc89f114ce93e2e80626c0b1479\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:32:10.074488 containerd[1593]: time="2025-06-20T19:32:10.074434225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84f612b8557459546619150f2293a6b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3572519af703e258bbe3f1b435693355b5ce0efe2dc8d2225df574f24c705fce\"" Jun 20 19:32:10.077680 containerd[1593]: time="2025-06-20T19:32:10.077657733Z" level=info msg="CreateContainer within sandbox \"3572519af703e258bbe3f1b435693355b5ce0efe2dc8d2225df574f24c705fce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:32:10.079042 containerd[1593]: time="2025-06-20T19:32:10.079020601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0daf0ee2787d37ece1481863541d614a737c20a9de562f81d5ca065f27fb9ba\"" Jun 20 19:32:10.081379 containerd[1593]: time="2025-06-20T19:32:10.081341565Z" level=info msg="CreateContainer within sandbox \"f0daf0ee2787d37ece1481863541d614a737c20a9de562f81d5ca065f27fb9ba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:32:10.086950 containerd[1593]: time="2025-06-20T19:32:10.086915331Z" level=info msg="Container 7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693: CDI devices from CRI Config.CDIDevices: []" Jun 20 
19:32:10.089970 containerd[1593]: time="2025-06-20T19:32:10.089935265Z" level=info msg="Container ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:10.096874 containerd[1593]: time="2025-06-20T19:32:10.096823313Z" level=info msg="Container b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:10.101800 containerd[1593]: time="2025-06-20T19:32:10.101725541Z" level=info msg="CreateContainer within sandbox \"372dac438dc7cdea488258a43735a094d59bfcc89f114ce93e2e80626c0b1479\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693\"" Jun 20 19:32:10.102415 containerd[1593]: time="2025-06-20T19:32:10.102375364Z" level=info msg="StartContainer for \"7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693\"" Jun 20 19:32:10.103505 containerd[1593]: time="2025-06-20T19:32:10.103485804Z" level=info msg="connecting to shim 7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693" address="unix:///run/containerd/s/e0d2cb9f69596792ecac1e583fcf8f6e4561ccc1e514879378e99cbfad031454" protocol=ttrpc version=3 Jun 20 19:32:10.106075 containerd[1593]: time="2025-06-20T19:32:10.105832328Z" level=info msg="CreateContainer within sandbox \"3572519af703e258bbe3f1b435693355b5ce0efe2dc8d2225df574f24c705fce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c\"" Jun 20 19:32:10.106956 containerd[1593]: time="2025-06-20T19:32:10.106236960Z" level=info msg="StartContainer for \"ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c\"" Jun 20 19:32:10.107368 containerd[1593]: time="2025-06-20T19:32:10.107337149Z" level=info msg="connecting to shim ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c" 
address="unix:///run/containerd/s/db4ed081d520aa0e6d5195db614ee6567f166b1d9ebc9fad88f8646574cee1f3" protocol=ttrpc version=3 Jun 20 19:32:10.109714 containerd[1593]: time="2025-06-20T19:32:10.109662909Z" level=info msg="CreateContainer within sandbox \"f0daf0ee2787d37ece1481863541d614a737c20a9de562f81d5ca065f27fb9ba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1\"" Jun 20 19:32:10.110493 containerd[1593]: time="2025-06-20T19:32:10.110453894Z" level=info msg="StartContainer for \"b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1\"" Jun 20 19:32:10.112023 containerd[1593]: time="2025-06-20T19:32:10.111989090Z" level=info msg="connecting to shim b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1" address="unix:///run/containerd/s/67a009f65b35441f9e98e6b702aa8b73fed4a978c5e0e12bcc19268e91191e55" protocol=ttrpc version=3 Jun 20 19:32:10.124053 systemd[1]: Started cri-containerd-7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693.scope - libcontainer container 7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693. Jun 20 19:32:10.127760 systemd[1]: Started cri-containerd-ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c.scope - libcontainer container ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c. Jun 20 19:32:10.132292 systemd[1]: Started cri-containerd-b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1.scope - libcontainer container b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1. 
Jun 20 19:32:10.189951 containerd[1593]: time="2025-06-20T19:32:10.187679604Z" level=info msg="StartContainer for \"ddd29ac0e4139b54ee1f1fadc5d0ba250510284fb1fef7a0cfc847c353557d3c\" returns successfully" Jun 20 19:32:10.189951 containerd[1593]: time="2025-06-20T19:32:10.188313089Z" level=info msg="StartContainer for \"b876bfb512f749c3b75c6abfb9fcb264864ee727b3051f2b979815369a119db1\" returns successfully" Jun 20 19:32:10.251052 containerd[1593]: time="2025-06-20T19:32:10.251001221Z" level=info msg="StartContainer for \"7333333681a4b0582640ca64bee4cc894f30a2a933e344f4aa61c5ae346cb693\" returns successfully" Jun 20 19:32:10.664799 kubelet[2349]: I0620 19:32:10.664656 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:32:11.057604 kubelet[2349]: E0620 19:32:11.057489 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:32:11.059259 kubelet[2349]: E0620 19:32:11.059220 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:32:11.066866 kubelet[2349]: E0620 19:32:11.066838 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:32:11.672084 kubelet[2349]: E0620 19:32:11.672033 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 20 19:32:11.761008 kubelet[2349]: I0620 19:32:11.760957 2349 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:32:11.761008 kubelet[2349]: E0620 19:32:11.760994 2349 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:32:11.769942 kubelet[2349]: E0620 
19:32:11.769897 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:11.831218 kubelet[2349]: I0620 19:32:11.831177 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:11.835018 kubelet[2349]: E0620 19:32:11.834993 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:11.835018 kubelet[2349]: I0620 19:32:11.835010 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:11.836328 kubelet[2349]: E0620 19:32:11.836309 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:11.836328 kubelet[2349]: I0620 19:32:11.836325 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:11.837509 kubelet[2349]: E0620 19:32:11.837484 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:12.068337 kubelet[2349]: I0620 19:32:12.068238 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:12.068443 kubelet[2349]: I0620 19:32:12.068393 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:12.070036 kubelet[2349]: E0620 19:32:12.069997 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:12.071216 kubelet[2349]: E0620 19:32:12.071197 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:12.757914 kubelet[2349]: I0620 19:32:12.757863 2349 apiserver.go:52] "Watching apiserver" Jun 20 19:32:12.829968 kubelet[2349]: I0620 19:32:12.829932 2349 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:32:14.214208 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... Jun 20 19:32:14.214224 systemd[1]: Reloading... Jun 20 19:32:14.286816 zram_generator::config[2664]: No configuration found. Jun 20 19:32:14.387860 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:32:14.517612 systemd[1]: Reloading finished in 303 ms. Jun 20 19:32:14.544725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:32:14.545027 kubelet[2349]: I0620 19:32:14.544753 2349 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:32:14.561045 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:32:14.561363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:32:14.561412 systemd[1]: kubelet.service: Consumed 1.279s CPU time, 131.6M memory peak. Jun 20 19:32:14.563114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:32:14.776519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:32:14.790094 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:32:14.822815 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:32:14.822815 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:32:14.822815 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:32:14.823214 kubelet[2709]: I0620 19:32:14.822935 2709 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:32:14.829087 kubelet[2709]: I0620 19:32:14.829059 2709 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:32:14.829087 kubelet[2709]: I0620 19:32:14.829078 2709 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:32:14.829314 kubelet[2709]: I0620 19:32:14.829297 2709 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:32:14.830337 kubelet[2709]: I0620 19:32:14.830320 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:32:14.832486 kubelet[2709]: I0620 19:32:14.832396 2709 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:32:14.836114 kubelet[2709]: I0620 19:32:14.836093 2709 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:32:14.841184 kubelet[2709]: I0620 19:32:14.841168 2709 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:32:14.841423 kubelet[2709]: I0620 19:32:14.841395 2709 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:32:14.841591 kubelet[2709]: I0620 19:32:14.841424 2709 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:32:14.841591 kubelet[2709]: I0620 19:32:14.841585 2709 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:32:14.841764 kubelet[2709]: I0620 19:32:14.841596 2709 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:32:14.841764 kubelet[2709]: I0620 19:32:14.841650 2709 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:32:14.841876 kubelet[2709]: I0620 19:32:14.841857 2709 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:32:14.841914 kubelet[2709]: I0620 19:32:14.841884 2709 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:32:14.841914 kubelet[2709]: I0620 19:32:14.841909 2709 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:32:14.841967 kubelet[2709]: I0620 19:32:14.841921 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:32:14.842690 kubelet[2709]: I0620 19:32:14.842585 2709 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:32:14.843094 kubelet[2709]: I0620 19:32:14.843075 2709 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:32:15.128518 kubelet[2709]: I0620 19:32:15.128354 2709 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:32:15.128518 kubelet[2709]: I0620 19:32:15.128395 2709 server.go:1287] "Started kubelet" Jun 20 19:32:15.130360 kubelet[2709]: I0620 19:32:15.130010 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:32:15.134919 kubelet[2709]: I0620 19:32:15.134876 2709 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:32:15.135818 kubelet[2709]: I0620 19:32:15.135804 2709 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:32:15.136680 kubelet[2709]: I0620 19:32:15.136642 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:32:15.136946 kubelet[2709]: I0620 19:32:15.136932 2709 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:32:15.137316 kubelet[2709]: I0620 19:32:15.137300 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:32:15.139677 kubelet[2709]: I0620 19:32:15.139468 2709 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:32:15.140010 kubelet[2709]: E0620 19:32:15.139984 2709 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:32:15.140405 kubelet[2709]: I0620 19:32:15.140383 2709 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:32:15.140582 kubelet[2709]: I0620 19:32:15.140564 2709 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:32:15.148183 kubelet[2709]: I0620 19:32:15.148148 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:32:15.149371 kubelet[2709]: I0620 19:32:15.148962 2709 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:32:15.149371 kubelet[2709]: I0620 19:32:15.149086 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:32:15.150498 kubelet[2709]: E0620 19:32:15.150481 2709 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:32:15.150621 kubelet[2709]: I0620 19:32:15.150607 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:32:15.150675 kubelet[2709]: I0620 19:32:15.150666 2709 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:32:15.150749 kubelet[2709]: I0620 19:32:15.150736 2709 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:32:15.150814 kubelet[2709]: I0620 19:32:15.150805 2709 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:32:15.150910 kubelet[2709]: E0620 19:32:15.150893 2709 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:32:15.151550 kubelet[2709]: I0620 19:32:15.151293 2709 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.184952 2709 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.184971 2709 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.184986 2709 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.185144 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.185154 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.185172 2709 policy_none.go:49] "None policy: Start" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.185181 2709 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.185193 2709 state_mem.go:35] "Initializing new in-memory 
state store" Jun 20 19:32:15.185414 kubelet[2709]: I0620 19:32:15.185303 2709 state_mem.go:75] "Updated machine memory state" Jun 20 19:32:15.190049 kubelet[2709]: I0620 19:32:15.189928 2709 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:32:15.190111 kubelet[2709]: I0620 19:32:15.190091 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:32:15.190134 kubelet[2709]: I0620 19:32:15.190102 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:32:15.191029 kubelet[2709]: I0620 19:32:15.191002 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:32:15.194175 kubelet[2709]: E0620 19:32:15.193527 2709 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:32:15.252499 kubelet[2709]: I0620 19:32:15.252467 2709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:15.252742 kubelet[2709]: I0620 19:32:15.252502 2709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:15.253071 kubelet[2709]: I0620 19:32:15.252566 2709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:15.292241 kubelet[2709]: I0620 19:32:15.292207 2709 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:32:15.442120 kubelet[2709]: I0620 19:32:15.441975 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84f612b8557459546619150f2293a6b3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f612b8557459546619150f2293a6b3\") " pod="kube-system/kube-apiserver-localhost" Jun 20 
19:32:15.442120 kubelet[2709]: I0620 19:32:15.442065 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84f612b8557459546619150f2293a6b3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f612b8557459546619150f2293a6b3\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:15.442120 kubelet[2709]: I0620 19:32:15.442095 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84f612b8557459546619150f2293a6b3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84f612b8557459546619150f2293a6b3\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:32:15.442120 kubelet[2709]: I0620 19:32:15.442125 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:15.442390 kubelet[2709]: I0620 19:32:15.442141 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:15.442390 kubelet[2709]: I0620 19:32:15.442159 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 
19:32:15.442390 kubelet[2709]: I0620 19:32:15.442210 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:15.442390 kubelet[2709]: I0620 19:32:15.442244 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:32:15.442390 kubelet[2709]: I0620 19:32:15.442265 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:32:15.697446 kubelet[2709]: I0620 19:32:15.697328 2709 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jun 20 19:32:15.697446 kubelet[2709]: I0620 19:32:15.697412 2709 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:32:15.842471 kubelet[2709]: I0620 19:32:15.842418 2709 apiserver.go:52] "Watching apiserver" Jun 20 19:32:15.941571 kubelet[2709]: I0620 19:32:15.941518 2709 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:32:16.033605 kubelet[2709]: I0620 19:32:16.033271 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.033256322 
podStartE2EDuration="1.033256322s" podCreationTimestamp="2025-06-20 19:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:16.033019972 +0000 UTC m=+1.238876325" watchObservedRunningTime="2025-06-20 19:32:16.033256322 +0000 UTC m=+1.239112675"
Jun 20 19:32:16.040194 kubelet[2709]: I0620 19:32:16.040116 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.040106635 podStartE2EDuration="1.040106635s" podCreationTimestamp="2025-06-20 19:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:16.040030133 +0000 UTC m=+1.245886486" watchObservedRunningTime="2025-06-20 19:32:16.040106635 +0000 UTC m=+1.245962978"
Jun 20 19:32:16.054004 kubelet[2709]: I0620 19:32:16.052945 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.052926308 podStartE2EDuration="1.052926308s" podCreationTimestamp="2025-06-20 19:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:16.046690568 +0000 UTC m=+1.252546921" watchObservedRunningTime="2025-06-20 19:32:16.052926308 +0000 UTC m=+1.258782661"
Jun 20 19:32:16.168591 kubelet[2709]: I0620 19:32:16.168553 2709 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jun 20 19:32:16.174625 kubelet[2709]: E0620 19:32:16.174586 2709 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jun 20 19:32:18.774910 kubelet[2709]: I0620 19:32:18.774877 2709 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:32:18.775337 containerd[1593]: time="2025-06-20T19:32:18.775295510Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 19:32:18.775572 kubelet[2709]: I0620 19:32:18.775494 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:32:19.766161 systemd[1]: Created slice kubepods-besteffort-pod33fc5ca3_5279_435a_91b8_43224a2440fb.slice - libcontainer container kubepods-besteffort-pod33fc5ca3_5279_435a_91b8_43224a2440fb.slice.
Jun 20 19:32:19.766563 kubelet[2709]: I0620 19:32:19.766393 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33fc5ca3-5279-435a-91b8-43224a2440fb-kube-proxy\") pod \"kube-proxy-vvgd7\" (UID: \"33fc5ca3-5279-435a-91b8-43224a2440fb\") " pod="kube-system/kube-proxy-vvgd7"
Jun 20 19:32:19.766563 kubelet[2709]: I0620 19:32:19.766431 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33fc5ca3-5279-435a-91b8-43224a2440fb-lib-modules\") pod \"kube-proxy-vvgd7\" (UID: \"33fc5ca3-5279-435a-91b8-43224a2440fb\") " pod="kube-system/kube-proxy-vvgd7"
Jun 20 19:32:19.766563 kubelet[2709]: I0620 19:32:19.766467 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkj8g\" (UniqueName: \"kubernetes.io/projected/33fc5ca3-5279-435a-91b8-43224a2440fb-kube-api-access-lkj8g\") pod \"kube-proxy-vvgd7\" (UID: \"33fc5ca3-5279-435a-91b8-43224a2440fb\") " pod="kube-system/kube-proxy-vvgd7"
Jun 20 19:32:19.766563 kubelet[2709]: I0620 19:32:19.766487 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33fc5ca3-5279-435a-91b8-43224a2440fb-xtables-lock\") pod \"kube-proxy-vvgd7\" (UID: \"33fc5ca3-5279-435a-91b8-43224a2440fb\") " pod="kube-system/kube-proxy-vvgd7"
Jun 20 19:32:19.929026 systemd[1]: Created slice kubepods-besteffort-pod61d4739e_5000_41f7_b4bb_470d5485375d.slice - libcontainer container kubepods-besteffort-pod61d4739e_5000_41f7_b4bb_470d5485375d.slice.
Jun 20 19:32:19.968044 kubelet[2709]: I0620 19:32:19.967992 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfvg\" (UniqueName: \"kubernetes.io/projected/61d4739e-5000-41f7-b4bb-470d5485375d-kube-api-access-gzfvg\") pod \"tigera-operator-68f7c7984d-9j9tb\" (UID: \"61d4739e-5000-41f7-b4bb-470d5485375d\") " pod="tigera-operator/tigera-operator-68f7c7984d-9j9tb"
Jun 20 19:32:19.968044 kubelet[2709]: I0620 19:32:19.968032 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61d4739e-5000-41f7-b4bb-470d5485375d-var-lib-calico\") pod \"tigera-operator-68f7c7984d-9j9tb\" (UID: \"61d4739e-5000-41f7-b4bb-470d5485375d\") " pod="tigera-operator/tigera-operator-68f7c7984d-9j9tb"
Jun 20 19:32:20.075737 containerd[1593]: time="2025-06-20T19:32:20.075640504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvgd7,Uid:33fc5ca3-5279-435a-91b8-43224a2440fb,Namespace:kube-system,Attempt:0,}"
Jun 20 19:32:20.232117 containerd[1593]: time="2025-06-20T19:32:20.232069239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-9j9tb,Uid:61d4739e-5000-41f7-b4bb-470d5485375d,Namespace:tigera-operator,Attempt:0,}"
Jun 20 19:32:20.347244 containerd[1593]: time="2025-06-20T19:32:20.347129906Z" level=info msg="connecting to shim 6ef2f5415db306d311635bf45f7163f4533342c756ae80ecd1293db0bf5aaf61" address="unix:///run/containerd/s/2a03a362a176aec70ecf88aea9502f5c4135714714f98a038d4e01a414e637d4" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:32:20.347837 containerd[1593]: time="2025-06-20T19:32:20.347800134Z" level=info msg="connecting to shim e33b41d39e1f11070ddfda5bcb6cb6aa3e369d88266bbb3fe549513c14a9580a" address="unix:///run/containerd/s/a883ecb142dbda05af1d2fce90df4f3560c430af2f39675a7bbd6d417cb28b49" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:32:20.380906 systemd[1]: Started cri-containerd-6ef2f5415db306d311635bf45f7163f4533342c756ae80ecd1293db0bf5aaf61.scope - libcontainer container 6ef2f5415db306d311635bf45f7163f4533342c756ae80ecd1293db0bf5aaf61.
Jun 20 19:32:20.384657 systemd[1]: Started cri-containerd-e33b41d39e1f11070ddfda5bcb6cb6aa3e369d88266bbb3fe549513c14a9580a.scope - libcontainer container e33b41d39e1f11070ddfda5bcb6cb6aa3e369d88266bbb3fe549513c14a9580a.
Jun 20 19:32:20.410011 containerd[1593]: time="2025-06-20T19:32:20.409910631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvgd7,Uid:33fc5ca3-5279-435a-91b8-43224a2440fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e33b41d39e1f11070ddfda5bcb6cb6aa3e369d88266bbb3fe549513c14a9580a\""
Jun 20 19:32:20.414812 containerd[1593]: time="2025-06-20T19:32:20.414760075Z" level=info msg="CreateContainer within sandbox \"e33b41d39e1f11070ddfda5bcb6cb6aa3e369d88266bbb3fe549513c14a9580a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:32:20.427475 containerd[1593]: time="2025-06-20T19:32:20.427271141Z" level=info msg="Container cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:32:20.432237 containerd[1593]: time="2025-06-20T19:32:20.432201043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-9j9tb,Uid:61d4739e-5000-41f7-b4bb-470d5485375d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ef2f5415db306d311635bf45f7163f4533342c756ae80ecd1293db0bf5aaf61\""
Jun 20 19:32:20.433616 containerd[1593]: time="2025-06-20T19:32:20.433576928Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\""
Jun 20 19:32:20.439211 containerd[1593]: time="2025-06-20T19:32:20.439169262Z" level=info msg="CreateContainer within sandbox \"e33b41d39e1f11070ddfda5bcb6cb6aa3e369d88266bbb3fe549513c14a9580a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095\""
Jun 20 19:32:20.439667 containerd[1593]: time="2025-06-20T19:32:20.439639447Z" level=info msg="StartContainer for \"cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095\""
Jun 20 19:32:20.441018 containerd[1593]: time="2025-06-20T19:32:20.440987107Z" level=info msg="connecting to shim cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095" address="unix:///run/containerd/s/a883ecb142dbda05af1d2fce90df4f3560c430af2f39675a7bbd6d417cb28b49" protocol=ttrpc version=3
Jun 20 19:32:20.467909 systemd[1]: Started cri-containerd-cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095.scope - libcontainer container cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095.
Jun 20 19:32:20.510176 containerd[1593]: time="2025-06-20T19:32:20.510124259Z" level=info msg="StartContainer for \"cc02d863c291f11ff1b7af8d3a43da1371a173928afb6918c02bf5f0a3728095\" returns successfully"
Jun 20 19:32:22.223821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517243208.mount: Deactivated successfully.
Jun 20 19:32:22.551833 containerd[1593]: time="2025-06-20T19:32:22.551698940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:32:22.553104 containerd[1593]: time="2025-06-20T19:32:22.552975648Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858"
Jun 20 19:32:22.554403 containerd[1593]: time="2025-06-20T19:32:22.554372120Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:32:22.556744 containerd[1593]: time="2025-06-20T19:32:22.556698893Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:32:22.557253 containerd[1593]: time="2025-06-20T19:32:22.557217197Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.123603547s"
Jun 20 19:32:22.557253 containerd[1593]: time="2025-06-20T19:32:22.557249209Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\""
Jun 20 19:32:22.559300 containerd[1593]: time="2025-06-20T19:32:22.559268821Z" level=info msg="CreateContainer within sandbox \"6ef2f5415db306d311635bf45f7163f4533342c756ae80ecd1293db0bf5aaf61\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 20 19:32:22.565768 containerd[1593]: time="2025-06-20T19:32:22.565729032Z" level=info msg="Container 2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:32:22.569507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218750325.mount: Deactivated successfully.
Jun 20 19:32:22.571845 containerd[1593]: time="2025-06-20T19:32:22.571810511Z" level=info msg="CreateContainer within sandbox \"6ef2f5415db306d311635bf45f7163f4533342c756ae80ecd1293db0bf5aaf61\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a\""
Jun 20 19:32:22.572199 containerd[1593]: time="2025-06-20T19:32:22.572168822Z" level=info msg="StartContainer for \"2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a\""
Jun 20 19:32:22.572926 containerd[1593]: time="2025-06-20T19:32:22.572887288Z" level=info msg="connecting to shim 2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a" address="unix:///run/containerd/s/2a03a362a176aec70ecf88aea9502f5c4135714714f98a038d4e01a414e637d4" protocol=ttrpc version=3
Jun 20 19:32:22.621905 systemd[1]: Started cri-containerd-2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a.scope - libcontainer container 2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a.
Jun 20 19:32:22.650503 containerd[1593]: time="2025-06-20T19:32:22.650462796Z" level=info msg="StartContainer for \"2810b79d1da26c6ef123aa87cc24314e88d2d8054ea19acb94e2010a1e886e9a\" returns successfully"
Jun 20 19:32:23.190615 kubelet[2709]: I0620 19:32:23.190550 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvgd7" podStartSLOduration=4.190531926 podStartE2EDuration="4.190531926s" podCreationTimestamp="2025-06-20 19:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:21.186124903 +0000 UTC m=+6.391981256" watchObservedRunningTime="2025-06-20 19:32:23.190531926 +0000 UTC m=+8.396388269"
Jun 20 19:32:27.766463 update_engine[1581]: I20250620 19:32:27.766321 1581 update_attempter.cc:509] Updating boot flags...
Jun 20 19:32:27.804105 sudo[1801]: pam_unix(sudo:session): session closed for user root
Jun 20 19:32:27.809715 sshd[1800]: Connection closed by 10.0.0.1 port 33942
Jun 20 19:32:27.807171 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
Jun 20 19:32:27.812738 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:33942.service: Deactivated successfully.
Jun 20 19:32:27.818701 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:32:27.823206 systemd[1]: session-7.scope: Consumed 4.610s CPU time, 228.8M memory peak.
Jun 20 19:32:27.839745 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:32:27.886363 systemd-logind[1580]: Removed session 7.
Jun 20 19:32:28.405104 kubelet[2709]: I0620 19:32:28.405043 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-9j9tb" podStartSLOduration=7.2801142930000005 podStartE2EDuration="9.405006946s" podCreationTimestamp="2025-06-20 19:32:19 +0000 UTC" firstStartedPulling="2025-06-20 19:32:20.433149408 +0000 UTC m=+5.639005751" lastFinishedPulling="2025-06-20 19:32:22.558042051 +0000 UTC m=+7.763898404" observedRunningTime="2025-06-20 19:32:23.190644937 +0000 UTC m=+8.396501400" watchObservedRunningTime="2025-06-20 19:32:28.405006946 +0000 UTC m=+13.610863299"
Jun 20 19:32:32.034913 systemd[1]: Created slice kubepods-besteffort-podf4f117ca_fcbd_44a8_b24c_014c8a7c3b57.slice - libcontainer container kubepods-besteffort-podf4f117ca_fcbd_44a8_b24c_014c8a7c3b57.slice.
Jun 20 19:32:32.050197 kubelet[2709]: I0620 19:32:32.050110 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcgb\" (UniqueName: \"kubernetes.io/projected/f4f117ca-fcbd-44a8-b24c-014c8a7c3b57-kube-api-access-kfcgb\") pod \"calico-typha-57ff54c66d-s5q45\" (UID: \"f4f117ca-fcbd-44a8-b24c-014c8a7c3b57\") " pod="calico-system/calico-typha-57ff54c66d-s5q45"
Jun 20 19:32:32.050197 kubelet[2709]: I0620 19:32:32.050157 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f117ca-fcbd-44a8-b24c-014c8a7c3b57-tigera-ca-bundle\") pod \"calico-typha-57ff54c66d-s5q45\" (UID: \"f4f117ca-fcbd-44a8-b24c-014c8a7c3b57\") " pod="calico-system/calico-typha-57ff54c66d-s5q45"
Jun 20 19:32:32.050197 kubelet[2709]: I0620 19:32:32.050180 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f4f117ca-fcbd-44a8-b24c-014c8a7c3b57-typha-certs\") pod \"calico-typha-57ff54c66d-s5q45\" (UID: \"f4f117ca-fcbd-44a8-b24c-014c8a7c3b57\") " pod="calico-system/calico-typha-57ff54c66d-s5q45"
Jun 20 19:32:32.341850 containerd[1593]: time="2025-06-20T19:32:32.341803137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57ff54c66d-s5q45,Uid:f4f117ca-fcbd-44a8-b24c-014c8a7c3b57,Namespace:calico-system,Attempt:0,}"
Jun 20 19:32:32.444045 kubelet[2709]: I0620 19:32:32.443835 2709 status_manager.go:890] "Failed to get status for pod" podUID="93a784a3-cbfa-43c0-8b07-829327bbdf2a" pod="calico-system/calico-node-mzrs4" err="pods \"calico-node-mzrs4\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object"
Jun 20 19:32:32.444045 kubelet[2709]: W0620 19:32:32.443910 2709 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Jun 20 19:32:32.444045 kubelet[2709]: E0620 19:32:32.443932 2709 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jun 20 19:32:32.454708 systemd[1]: Created slice kubepods-besteffort-pod93a784a3_cbfa_43c0_8b07_829327bbdf2a.slice - libcontainer container kubepods-besteffort-pod93a784a3_cbfa_43c0_8b07_829327bbdf2a.slice.
Jun 20 19:32:32.468234 containerd[1593]: time="2025-06-20T19:32:32.468190908Z" level=info msg="connecting to shim a3bdfbec8726971e97bef0497efea4f3abc2074b0b73b6d01a5cf9d72721fcea" address="unix:///run/containerd/s/aa67a4cdb25494a5c1c9b351b5ddf1161074b4a29beeb8a95117759e3874a1e4" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:32:32.500057 systemd[1]: Started cri-containerd-a3bdfbec8726971e97bef0497efea4f3abc2074b0b73b6d01a5cf9d72721fcea.scope - libcontainer container a3bdfbec8726971e97bef0497efea4f3abc2074b0b73b6d01a5cf9d72721fcea.
Jun 20 19:32:32.543930 containerd[1593]: time="2025-06-20T19:32:32.543890641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57ff54c66d-s5q45,Uid:f4f117ca-fcbd-44a8-b24c-014c8a7c3b57,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3bdfbec8726971e97bef0497efea4f3abc2074b0b73b6d01a5cf9d72721fcea\""
Jun 20 19:32:32.546076 containerd[1593]: time="2025-06-20T19:32:32.545978379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\""
Jun 20 19:32:32.553682 kubelet[2709]: I0620 19:32:32.553651 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-policysync\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.553926 kubelet[2709]: I0620 19:32:32.553896 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-flexvol-driver-host\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.553926 kubelet[2709]: I0620 19:32:32.553922 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-var-run-calico\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554114 kubelet[2709]: I0620 19:32:32.553972 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-xtables-lock\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554114 kubelet[2709]: I0620 19:32:32.554000 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-cni-net-dir\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554162 kubelet[2709]: I0620 19:32:32.554115 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2t2r\" (UniqueName: \"kubernetes.io/projected/93a784a3-cbfa-43c0-8b07-829327bbdf2a-kube-api-access-v2t2r\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554162 kubelet[2709]: I0620 19:32:32.554140 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-cni-bin-dir\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554162 kubelet[2709]: I0620 19:32:32.554156 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93a784a3-cbfa-43c0-8b07-829327bbdf2a-node-certs\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554255 kubelet[2709]: I0620 19:32:32.554169 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-var-lib-calico\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554255 kubelet[2709]: I0620 19:32:32.554218 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-cni-log-dir\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554255 kubelet[2709]: I0620 19:32:32.554232 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93a784a3-cbfa-43c0-8b07-829327bbdf2a-lib-modules\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.554255 kubelet[2709]: I0620 19:32:32.554248 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93a784a3-cbfa-43c0-8b07-829327bbdf2a-tigera-ca-bundle\") pod \"calico-node-mzrs4\" (UID: \"93a784a3-cbfa-43c0-8b07-829327bbdf2a\") " pod="calico-system/calico-node-mzrs4"
Jun 20 19:32:32.569789 kubelet[2709]: E0620 19:32:32.569522 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5"
Jun 20 19:32:32.655623 kubelet[2709]: I0620 19:32:32.655102 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/10fc28c5-4d33-4680-87d1-31c5370f21a5-socket-dir\") pod \"csi-node-driver-82mq6\" (UID: \"10fc28c5-4d33-4680-87d1-31c5370f21a5\") " pod="calico-system/csi-node-driver-82mq6"
Jun 20 19:32:32.655623 kubelet[2709]: I0620 19:32:32.655152 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10fc28c5-4d33-4680-87d1-31c5370f21a5-kubelet-dir\") pod \"csi-node-driver-82mq6\" (UID: \"10fc28c5-4d33-4680-87d1-31c5370f21a5\") " pod="calico-system/csi-node-driver-82mq6"
Jun 20 19:32:32.655623 kubelet[2709]: I0620 19:32:32.655246 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/10fc28c5-4d33-4680-87d1-31c5370f21a5-registration-dir\") pod \"csi-node-driver-82mq6\" (UID: \"10fc28c5-4d33-4680-87d1-31c5370f21a5\") " pod="calico-system/csi-node-driver-82mq6"
Jun 20 19:32:32.655623 kubelet[2709]: I0620 19:32:32.655287 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/10fc28c5-4d33-4680-87d1-31c5370f21a5-varrun\") pod \"csi-node-driver-82mq6\" (UID: \"10fc28c5-4d33-4680-87d1-31c5370f21a5\") " pod="calico-system/csi-node-driver-82mq6"
Jun 20 19:32:32.655623 kubelet[2709]: I0620 19:32:32.655307 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4znx\" (UniqueName: \"kubernetes.io/projected/10fc28c5-4d33-4680-87d1-31c5370f21a5-kube-api-access-p4znx\") pod \"csi-node-driver-82mq6\" (UID: \"10fc28c5-4d33-4680-87d1-31c5370f21a5\") " pod="calico-system/csi-node-driver-82mq6"
Jun 20 19:32:32.657063 kubelet[2709]: E0620 19:32:32.657027 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.657063 kubelet[2709]: W0620 19:32:32.657047 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.657219 kubelet[2709]: E0620 19:32:32.657079 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.663682 kubelet[2709]: E0620 19:32:32.663646 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.663682 kubelet[2709]: W0620 19:32:32.663663 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.663682 kubelet[2709]: E0620 19:32:32.663680 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.756004 kubelet[2709]: E0620 19:32:32.755932 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.756004 kubelet[2709]: W0620 19:32:32.755961 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.756004 kubelet[2709]: E0620 19:32:32.755995 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.756304 kubelet[2709]: E0620 19:32:32.756281 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.756304 kubelet[2709]: W0620 19:32:32.756293 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.756421 kubelet[2709]: E0620 19:32:32.756306 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.756496 kubelet[2709]: E0620 19:32:32.756483 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.756496 kubelet[2709]: W0620 19:32:32.756492 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.756548 kubelet[2709]: E0620 19:32:32.756505 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.756709 kubelet[2709]: E0620 19:32:32.756695 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.756709 kubelet[2709]: W0620 19:32:32.756707 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.756768 kubelet[2709]: E0620 19:32:32.756720 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.757028 kubelet[2709]: E0620 19:32:32.756995 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.757028 kubelet[2709]: W0620 19:32:32.757019 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.757073 kubelet[2709]: E0620 19:32:32.757049 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.757237 kubelet[2709]: E0620 19:32:32.757222 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.757237 kubelet[2709]: W0620 19:32:32.757233 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.757286 kubelet[2709]: E0620 19:32:32.757249 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.757429 kubelet[2709]: E0620 19:32:32.757414 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.757429 kubelet[2709]: W0620 19:32:32.757425 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.757477 kubelet[2709]: E0620 19:32:32.757438 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.757635 kubelet[2709]: E0620 19:32:32.757621 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.757635 kubelet[2709]: W0620 19:32:32.757631 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.757686 kubelet[2709]: E0620 19:32:32.757645 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.757940 kubelet[2709]: E0620 19:32:32.757918 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.757974 kubelet[2709]: W0620 19:32:32.757939 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.757974 kubelet[2709]: E0620 19:32:32.757967 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.758155 kubelet[2709]: E0620 19:32:32.758140 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.758155 kubelet[2709]: W0620 19:32:32.758151 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.758199 kubelet[2709]: E0620 19:32:32.758180 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.758337 kubelet[2709]: E0620 19:32:32.758322 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.758337 kubelet[2709]: W0620 19:32:32.758332 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.758390 kubelet[2709]: E0620 19:32:32.758359 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.758516 kubelet[2709]: E0620 19:32:32.758502 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.758516 kubelet[2709]: W0620 19:32:32.758512 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.758561 kubelet[2709]: E0620 19:32:32.758542 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.758689 kubelet[2709]: E0620 19:32:32.758675 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.758689 kubelet[2709]: W0620 19:32:32.758685 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.758730 kubelet[2709]: E0620 19:32:32.758713 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.758921 kubelet[2709]: E0620 19:32:32.758906 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.758921 kubelet[2709]: W0620 19:32:32.758920 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.758969 kubelet[2709]: E0620 19:32:32.758948 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.759102 kubelet[2709]: E0620 19:32:32.759088 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.759102 kubelet[2709]: W0620 19:32:32.759098 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.759148 kubelet[2709]: E0620 19:32:32.759111 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.759300 kubelet[2709]: E0620 19:32:32.759285 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.759300 kubelet[2709]: W0620 19:32:32.759296 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.759346 kubelet[2709]: E0620 19:32:32.759309 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.759573 kubelet[2709]: E0620 19:32:32.759542 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.759573 kubelet[2709]: W0620 19:32:32.759556 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.759573 kubelet[2709]: E0620 19:32:32.759574 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:32.759787 kubelet[2709]: E0620 19:32:32.759728 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:32.759787 kubelet[2709]: W0620 19:32:32.759735 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:32.759828 kubelet[2709]: E0620 19:32:32.759790 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:32.759964 kubelet[2709]: E0620 19:32:32.759948 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.759964 kubelet[2709]: W0620 19:32:32.759960 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.760032 kubelet[2709]: E0620 19:32:32.759993 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:32.760296 kubelet[2709]: E0620 19:32:32.760174 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.760296 kubelet[2709]: W0620 19:32:32.760188 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.760296 kubelet[2709]: E0620 19:32:32.760223 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:32.760457 kubelet[2709]: E0620 19:32:32.760438 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.760457 kubelet[2709]: W0620 19:32:32.760453 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.760499 kubelet[2709]: E0620 19:32:32.760475 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:32.760892 kubelet[2709]: E0620 19:32:32.760796 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.760892 kubelet[2709]: W0620 19:32:32.760812 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.760892 kubelet[2709]: E0620 19:32:32.760832 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:32.761121 kubelet[2709]: E0620 19:32:32.761104 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.761121 kubelet[2709]: W0620 19:32:32.761116 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.761187 kubelet[2709]: E0620 19:32:32.761133 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:32.761536 kubelet[2709]: E0620 19:32:32.761506 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.761536 kubelet[2709]: W0620 19:32:32.761520 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.761536 kubelet[2709]: E0620 19:32:32.761537 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:32.761783 kubelet[2709]: E0620 19:32:32.761744 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.761783 kubelet[2709]: W0620 19:32:32.761758 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.761783 kubelet[2709]: E0620 19:32:32.761784 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:32.766597 kubelet[2709]: E0620 19:32:32.766566 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:32.766597 kubelet[2709]: W0620 19:32:32.766579 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:32.766597 kubelet[2709]: E0620 19:32:32.766588 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:33.655834 kubelet[2709]: E0620 19:32:33.655785 2709 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jun 20 19:32:33.656368 kubelet[2709]: E0620 19:32:33.655902 2709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93a784a3-cbfa-43c0-8b07-829327bbdf2a-node-certs podName:93a784a3-cbfa-43c0-8b07-829327bbdf2a nodeName:}" failed. No retries permitted until 2025-06-20 19:32:34.155880014 +0000 UTC m=+19.361736367 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/93a784a3-cbfa-43c0-8b07-829327bbdf2a-node-certs") pod "calico-node-mzrs4" (UID: "93a784a3-cbfa-43c0-8b07-829327bbdf2a") : failed to sync secret cache: timed out waiting for the condition Jun 20 19:32:33.663641 kubelet[2709]: E0620 19:32:33.663609 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:33.663641 kubelet[2709]: W0620 19:32:33.663625 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:33.663641 kubelet[2709]: E0620 19:32:33.663647 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:33.765052 kubelet[2709]: E0620 19:32:33.765016 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:33.765052 kubelet[2709]: W0620 19:32:33.765039 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:33.765052 kubelet[2709]: E0620 19:32:33.765059 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:33.865924 kubelet[2709]: E0620 19:32:33.865878 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:33.865924 kubelet[2709]: W0620 19:32:33.865900 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:33.865924 kubelet[2709]: E0620 19:32:33.865917 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:33.967067 kubelet[2709]: E0620 19:32:33.966937 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:33.967067 kubelet[2709]: W0620 19:32:33.966965 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:33.967067 kubelet[2709]: E0620 19:32:33.966986 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:34.067751 kubelet[2709]: E0620 19:32:34.067709 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.067751 kubelet[2709]: W0620 19:32:34.067737 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.067751 kubelet[2709]: E0620 19:32:34.067759 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:34.097652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357137962.mount: Deactivated successfully. Jun 20 19:32:34.151382 kubelet[2709]: E0620 19:32:34.151333 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5" Jun 20 19:32:34.168691 kubelet[2709]: E0620 19:32:34.168652 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.168691 kubelet[2709]: W0620 19:32:34.168673 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.168691 kubelet[2709]: E0620 19:32:34.168688 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:34.168983 kubelet[2709]: E0620 19:32:34.168956 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.168983 kubelet[2709]: W0620 19:32:34.168969 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.168983 kubelet[2709]: E0620 19:32:34.168977 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:34.169185 kubelet[2709]: E0620 19:32:34.169148 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.169185 kubelet[2709]: W0620 19:32:34.169156 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.169185 kubelet[2709]: E0620 19:32:34.169164 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:34.169438 kubelet[2709]: E0620 19:32:34.169421 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.169438 kubelet[2709]: W0620 19:32:34.169433 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.169492 kubelet[2709]: E0620 19:32:34.169443 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:34.169721 kubelet[2709]: E0620 19:32:34.169703 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.169721 kubelet[2709]: W0620 19:32:34.169715 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.169805 kubelet[2709]: E0620 19:32:34.169724 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:34.176536 kubelet[2709]: E0620 19:32:34.176513 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:34.176536 kubelet[2709]: W0620 19:32:34.176534 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:34.176699 kubelet[2709]: E0620 19:32:34.176544 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:34.263403 containerd[1593]: time="2025-06-20T19:32:34.263280694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mzrs4,Uid:93a784a3-cbfa-43c0-8b07-829327bbdf2a,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:34.829127 containerd[1593]: time="2025-06-20T19:32:34.829075656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:34.829967 containerd[1593]: time="2025-06-20T19:32:34.829883317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:32:34.855423 containerd[1593]: time="2025-06-20T19:32:34.855381148Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:34.865021 containerd[1593]: time="2025-06-20T19:32:34.864961624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:34.865345 containerd[1593]: time="2025-06-20T19:32:34.865301457Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.31927696s" Jun 20 19:32:34.865345 containerd[1593]: time="2025-06-20T19:32:34.865340442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:32:34.868455 containerd[1593]: time="2025-06-20T19:32:34.868230291Z" level=info msg="connecting to shim c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205" address="unix:///run/containerd/s/e9fd726b1d43da2867985d35a78a024a02292f1c3b938ed9ea8240f76f2a3d2b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:34.879473 containerd[1593]: time="2025-06-20T19:32:34.879430968Z" level=info msg="CreateContainer within sandbox \"a3bdfbec8726971e97bef0497efea4f3abc2074b0b73b6d01a5cf9d72721fcea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:32:34.890129 containerd[1593]: time="2025-06-20T19:32:34.890104824Z" level=info msg="Container 0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:34.895042 systemd[1]: Started cri-containerd-c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205.scope - libcontainer container c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205. 
Jun 20 19:32:34.897818 containerd[1593]: time="2025-06-20T19:32:34.897767638Z" level=info msg="CreateContainer within sandbox \"a3bdfbec8726971e97bef0497efea4f3abc2074b0b73b6d01a5cf9d72721fcea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6\"" Jun 20 19:32:34.898410 containerd[1593]: time="2025-06-20T19:32:34.898377028Z" level=info msg="StartContainer for \"0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6\"" Jun 20 19:32:34.899340 containerd[1593]: time="2025-06-20T19:32:34.899309588Z" level=info msg="connecting to shim 0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6" address="unix:///run/containerd/s/aa67a4cdb25494a5c1c9b351b5ddf1161074b4a29beeb8a95117759e3874a1e4" protocol=ttrpc version=3 Jun 20 19:32:34.921910 systemd[1]: Started cri-containerd-0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6.scope - libcontainer container 0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6. 
Jun 20 19:32:34.926687 containerd[1593]: time="2025-06-20T19:32:34.926649155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mzrs4,Uid:93a784a3-cbfa-43c0-8b07-829327bbdf2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\"" Jun 20 19:32:34.929654 containerd[1593]: time="2025-06-20T19:32:34.929632825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:32:34.974977 containerd[1593]: time="2025-06-20T19:32:34.974921497Z" level=info msg="StartContainer for \"0b14efd79c6bcd2d4ddbf884b5e82dcda9b60f9ac31d8db1c188c9f639e9a4c6\" returns successfully" Jun 20 19:32:35.250976 kubelet[2709]: E0620 19:32:35.250865 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.250976 kubelet[2709]: W0620 19:32:35.250893 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.250976 kubelet[2709]: E0620 19:32:35.250912 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.251397 kubelet[2709]: E0620 19:32:35.251107 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.251397 kubelet[2709]: W0620 19:32:35.251115 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.251397 kubelet[2709]: E0620 19:32:35.251123 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.251397 kubelet[2709]: E0620 19:32:35.251285 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.251397 kubelet[2709]: W0620 19:32:35.251293 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.251397 kubelet[2709]: E0620 19:32:35.251300 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.251689 kubelet[2709]: E0620 19:32:35.251564 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.251689 kubelet[2709]: W0620 19:32:35.251573 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.251689 kubelet[2709]: E0620 19:32:35.251582 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.251821 kubelet[2709]: E0620 19:32:35.251796 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.251821 kubelet[2709]: W0620 19:32:35.251808 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.251821 kubelet[2709]: E0620 19:32:35.251817 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.252044 kubelet[2709]: E0620 19:32:35.252012 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.252074 kubelet[2709]: W0620 19:32:35.252045 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.252108 kubelet[2709]: E0620 19:32:35.252073 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.252369 kubelet[2709]: E0620 19:32:35.252339 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.252369 kubelet[2709]: W0620 19:32:35.252356 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.252369 kubelet[2709]: E0620 19:32:35.252366 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.252595 kubelet[2709]: E0620 19:32:35.252577 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.252595 kubelet[2709]: W0620 19:32:35.252590 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.252685 kubelet[2709]: E0620 19:32:35.252598 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.252834 kubelet[2709]: E0620 19:32:35.252819 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.252834 kubelet[2709]: W0620 19:32:35.252829 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.252892 kubelet[2709]: E0620 19:32:35.252847 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.253029 kubelet[2709]: E0620 19:32:35.253015 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.253066 kubelet[2709]: W0620 19:32:35.253042 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.253066 kubelet[2709]: E0620 19:32:35.253052 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.253231 kubelet[2709]: E0620 19:32:35.253216 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.253231 kubelet[2709]: W0620 19:32:35.253227 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.253302 kubelet[2709]: E0620 19:32:35.253235 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.253398 kubelet[2709]: E0620 19:32:35.253384 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.253398 kubelet[2709]: W0620 19:32:35.253394 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.253454 kubelet[2709]: E0620 19:32:35.253401 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.253573 kubelet[2709]: E0620 19:32:35.253556 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.253573 kubelet[2709]: W0620 19:32:35.253566 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.253573 kubelet[2709]: E0620 19:32:35.253574 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:32:35.253766 kubelet[2709]: E0620 19:32:35.253746 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.253766 kubelet[2709]: W0620 19:32:35.253761 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.253766 kubelet[2709]: E0620 19:32:35.253768 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:35.254030 kubelet[2709]: E0620 19:32:35.253992 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:35.254030 kubelet[2709]: W0620 19:32:35.254004 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:35.254030 kubelet[2709]: E0620 19:32:35.254020 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:32:36.151715 kubelet[2709]: E0620 19:32:36.151674 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5"
Jun 20 19:32:36.206067 kubelet[2709]: I0620 19:32:36.206027 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:32:36.261631 kubelet[2709]: E0620 19:32:36.261605 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:36.261631 kubelet[2709]: W0620 19:32:36.261623 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:36.262030 kubelet[2709]: E0620 19:32:36.261640 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:36.262030 kubelet[2709]: E0620 19:32:36.261870 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:36.262030 kubelet[2709]: W0620 19:32:36.261879 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:36.262030 kubelet[2709]: E0620 19:32:36.261887 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jun 20 19:32:36.264442 kubelet[2709]: E0620 19:32:36.264428 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:36.264442 kubelet[2709]: W0620 19:32:36.264438 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:36.264493 kubelet[2709]: E0620 19:32:36.264445 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:32:36.267418 containerd[1593]: time="2025-06-20T19:32:36.267373491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:32:36.268075 containerd[1593]: time="2025-06-20T19:32:36.268038948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627"
Jun 20 19:32:36.269061 containerd[1593]: time="2025-06-20T19:32:36.269016160Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:32:36.270851 containerd[1593]: time="2025-06-20T19:32:36.270819255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:32:36.271353 containerd[1593]: time="2025-06-20T19:32:36.271321328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.341560757s"
Jun 20 19:32:36.271379 containerd[1593]: time="2025-06-20T19:32:36.271350003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\""
Jun 20 19:32:36.273321 containerd[1593]: time="2025-06-20T19:32:36.273258771Z" level=info msg="CreateContainer within sandbox \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jun 20 19:32:36.282278 containerd[1593]: time="2025-06-20T19:32:36.282227167Z" level=info msg="Container d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:32:36.284405 kubelet[2709]: E0620 19:32:36.284289 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:32:36.284405 kubelet[2709]: W0620 19:32:36.284332 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:32:36.284405 kubelet[2709]: E0620 19:32:36.284352 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jun 20 19:32:36.289877 kubelet[2709]: E0620 19:32:36.289859 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:32:36.289912 kubelet[2709]: W0620 19:32:36.289872 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:32:36.289912 kubelet[2709]: E0620 19:32:36.289907 2709 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:32:36.291009 containerd[1593]: time="2025-06-20T19:32:36.290972097Z" level=info msg="CreateContainer within sandbox \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\"" Jun 20 19:32:36.291572 containerd[1593]: time="2025-06-20T19:32:36.291535787Z" level=info msg="StartContainer for \"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\"" Jun 20 19:32:36.293074 containerd[1593]: time="2025-06-20T19:32:36.293037375Z" level=info msg="connecting to shim d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a" address="unix:///run/containerd/s/e9fd726b1d43da2867985d35a78a024a02292f1c3b938ed9ea8240f76f2a3d2b" protocol=ttrpc version=3 Jun 20 19:32:36.318119 systemd[1]: Started cri-containerd-d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a.scope - libcontainer container d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a. Jun 20 19:32:36.372972 systemd[1]: cri-containerd-d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a.scope: Deactivated successfully. 
Jun 20 19:32:36.373388 systemd[1]: cri-containerd-d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a.scope: Consumed 40ms CPU time, 6.3M memory peak, 4.6M written to disk. Jun 20 19:32:36.375943 containerd[1593]: time="2025-06-20T19:32:36.375910934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\" id:\"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\" pid:3409 exited_at:{seconds:1750447956 nanos:375461904}" Jun 20 19:32:36.613136 containerd[1593]: time="2025-06-20T19:32:36.613073515Z" level=info msg="received exit event container_id:\"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\" id:\"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\" pid:3409 exited_at:{seconds:1750447956 nanos:375461904}" Jun 20 19:32:36.615140 containerd[1593]: time="2025-06-20T19:32:36.615091442Z" level=info msg="StartContainer for \"d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a\" returns successfully" Jun 20 19:32:36.635121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d38af84453d258df3008bf479bf7f4b46fda5fc8377bb463d317290a3ae95c6a-rootfs.mount: Deactivated successfully. 
Jun 20 19:32:37.210730 containerd[1593]: time="2025-06-20T19:32:37.210656945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:32:37.243462 kubelet[2709]: I0620 19:32:37.243381 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57ff54c66d-s5q45" podStartSLOduration=2.922379089 podStartE2EDuration="5.243365883s" podCreationTimestamp="2025-06-20 19:32:32 +0000 UTC" firstStartedPulling="2025-06-20 19:32:32.545350862 +0000 UTC m=+17.751207215" lastFinishedPulling="2025-06-20 19:32:34.866337656 +0000 UTC m=+20.072194009" observedRunningTime="2025-06-20 19:32:35.243471761 +0000 UTC m=+20.449328104" watchObservedRunningTime="2025-06-20 19:32:37.243365883 +0000 UTC m=+22.449222226" Jun 20 19:32:38.152068 kubelet[2709]: E0620 19:32:38.152027 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5" Jun 20 19:32:40.151996 kubelet[2709]: E0620 19:32:40.151942 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5" Jun 20 19:32:40.798302 containerd[1593]: time="2025-06-20T19:32:40.798246246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:40.799044 containerd[1593]: time="2025-06-20T19:32:40.798999014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:32:40.800164 containerd[1593]: 
time="2025-06-20T19:32:40.800132019Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:40.801950 containerd[1593]: time="2025-06-20T19:32:40.801928120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:40.802517 containerd[1593]: time="2025-06-20T19:32:40.802480335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 3.591775376s" Jun 20 19:32:40.802555 containerd[1593]: time="2025-06-20T19:32:40.802516173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:32:40.804415 containerd[1593]: time="2025-06-20T19:32:40.804380333Z" level=info msg="CreateContainer within sandbox \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:32:40.813202 containerd[1593]: time="2025-06-20T19:32:40.813164233Z" level=info msg="Container aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:40.822350 containerd[1593]: time="2025-06-20T19:32:40.822313679Z" level=info msg="CreateContainer within sandbox \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\"" Jun 20 
19:32:40.822962 containerd[1593]: time="2025-06-20T19:32:40.822931881Z" level=info msg="StartContainer for \"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\"" Jun 20 19:32:40.824504 containerd[1593]: time="2025-06-20T19:32:40.824463065Z" level=info msg="connecting to shim aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad" address="unix:///run/containerd/s/e9fd726b1d43da2867985d35a78a024a02292f1c3b938ed9ea8240f76f2a3d2b" protocol=ttrpc version=3 Jun 20 19:32:40.845942 systemd[1]: Started cri-containerd-aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad.scope - libcontainer container aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad. Jun 20 19:32:40.960590 containerd[1593]: time="2025-06-20T19:32:40.960553171Z" level=info msg="StartContainer for \"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\" returns successfully" Jun 20 19:32:42.152016 kubelet[2709]: E0620 19:32:42.151939 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5" Jun 20 19:32:42.604623 systemd[1]: cri-containerd-aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad.scope: Deactivated successfully. Jun 20 19:32:42.605444 systemd[1]: cri-containerd-aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad.scope: Consumed 613ms CPU time, 177.5M memory peak, 3.3M read from disk, 171.2M written to disk. 
Jun 20 19:32:42.606431 containerd[1593]: time="2025-06-20T19:32:42.606357506Z" level=info msg="received exit event container_id:\"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\" id:\"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\" pid:3470 exited_at:{seconds:1750447962 nanos:606139610}" Jun 20 19:32:42.606862 containerd[1593]: time="2025-06-20T19:32:42.606436467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\" id:\"aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad\" pid:3470 exited_at:{seconds:1750447962 nanos:606139610}" Jun 20 19:32:42.628006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa883983395ba38ae9da6cbfb56f3ed05f458e81765e5f3ec6782672d530b6ad-rootfs.mount: Deactivated successfully. Jun 20 19:32:42.694056 kubelet[2709]: I0620 19:32:42.694015 2709 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:32:43.087390 systemd[1]: Created slice kubepods-burstable-pod588ef3e4_eda0_4b6d_8388_71f60eb8e89d.slice - libcontainer container kubepods-burstable-pod588ef3e4_eda0_4b6d_8388_71f60eb8e89d.slice. Jun 20 19:32:43.123144 systemd[1]: Created slice kubepods-besteffort-pod930dcef4_43bd_41e9_9b33_cfe29691b1e8.slice - libcontainer container kubepods-besteffort-pod930dcef4_43bd_41e9_9b33_cfe29691b1e8.slice. Jun 20 19:32:43.128003 systemd[1]: Created slice kubepods-burstable-pod750ebe81_b0b4_4c6b_8270_1ad7fa1cd241.slice - libcontainer container kubepods-burstable-pod750ebe81_b0b4_4c6b_8270_1ad7fa1cd241.slice. 
Jun 20 19:32:43.130309 kubelet[2709]: I0620 19:32:43.130277 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqcd\" (UniqueName: \"kubernetes.io/projected/588ef3e4-eda0-4b6d-8388-71f60eb8e89d-kube-api-access-8xqcd\") pod \"coredns-668d6bf9bc-nczfq\" (UID: \"588ef3e4-eda0-4b6d-8388-71f60eb8e89d\") " pod="kube-system/coredns-668d6bf9bc-nczfq" Jun 20 19:32:43.130406 kubelet[2709]: I0620 19:32:43.130318 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/588ef3e4-eda0-4b6d-8388-71f60eb8e89d-config-volume\") pod \"coredns-668d6bf9bc-nczfq\" (UID: \"588ef3e4-eda0-4b6d-8388-71f60eb8e89d\") " pod="kube-system/coredns-668d6bf9bc-nczfq" Jun 20 19:32:43.133837 systemd[1]: Created slice kubepods-besteffort-pod3cc2adaf_8ce5_4e33_9a9f_758c8f132fce.slice - libcontainer container kubepods-besteffort-pod3cc2adaf_8ce5_4e33_9a9f_758c8f132fce.slice. Jun 20 19:32:43.138181 systemd[1]: Created slice kubepods-besteffort-pod39b2c0d8_49ef_476a_8e57_87f01819d6f1.slice - libcontainer container kubepods-besteffort-pod39b2c0d8_49ef_476a_8e57_87f01819d6f1.slice. Jun 20 19:32:43.146438 systemd[1]: Created slice kubepods-besteffort-podaf7de0bd_f3dc_480d_9df4_554fb6215492.slice - libcontainer container kubepods-besteffort-podaf7de0bd_f3dc_480d_9df4_554fb6215492.slice. Jun 20 19:32:43.150885 systemd[1]: Created slice kubepods-besteffort-podbfc4a5a7_f2e7_4326_8172_338c5c23e0de.slice - libcontainer container kubepods-besteffort-podbfc4a5a7_f2e7_4326_8172_338c5c23e0de.slice. 
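The "Created slice" entries above follow the kubelet systemd cgroup driver's naming convention: `kubepods-<qos>-pod<uid>.slice`, with every `-` in the pod UID escaped to `_` because systemd reserves `-` as its slice hierarchy separator. A hypothetical helper mirroring that convention, checked against the coredns pod UID from the log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reconstructs the systemd slice name the kubelet's systemd cgroup
// driver creates for a pod: QoS class plus the pod UID with dashes escaped
// to underscores. Illustrative helper, not kubelet source.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of coredns-668d6bf9bc-nczfq from the reconciler entries above.
	fmt.Println(podSlice("burstable", "588ef3e4-eda0-4b6d-8388-71f60eb8e89d"))
	// kubepods-burstable-pod588ef3e4_eda0_4b6d_8388_71f60eb8e89d.slice
}
```

The besteffort slices above (e.g. for the goldmane and calico-apiserver pods) follow the same pattern with `besteffort` in place of `burstable`; Guaranteed-QoS pods sit directly under `kubepods.slice`.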
Jun 20 19:32:43.224918 containerd[1593]: time="2025-06-20T19:32:43.224878973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:32:43.230746 kubelet[2709]: I0620 19:32:43.230706 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b2c0d8-49ef-476a-8e57-87f01819d6f1-tigera-ca-bundle\") pod \"calico-kube-controllers-7cd54bd598-j9rdn\" (UID: \"39b2c0d8-49ef-476a-8e57-87f01819d6f1\") " pod="calico-system/calico-kube-controllers-7cd54bd598-j9rdn" Jun 20 19:32:43.230746 kubelet[2709]: I0620 19:32:43.230744 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-ca-bundle\") pod \"whisker-85dfc8d959-fndhf\" (UID: \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\") " pod="calico-system/whisker-85dfc8d959-fndhf" Jun 20 19:32:43.231104 kubelet[2709]: I0620 19:32:43.230762 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfsb\" (UniqueName: \"kubernetes.io/projected/750ebe81-b0b4-4c6b-8270-1ad7fa1cd241-kube-api-access-sqfsb\") pod \"coredns-668d6bf9bc-7sdlx\" (UID: \"750ebe81-b0b4-4c6b-8270-1ad7fa1cd241\") " pod="kube-system/coredns-668d6bf9bc-7sdlx" Jun 20 19:32:43.231104 kubelet[2709]: I0620 19:32:43.230790 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af7de0bd-f3dc-480d-9df4-554fb6215492-calico-apiserver-certs\") pod \"calico-apiserver-7cc96d8cf6-s2hg5\" (UID: \"af7de0bd-f3dc-480d-9df4-554fb6215492\") " pod="calico-apiserver/calico-apiserver-7cc96d8cf6-s2hg5" Jun 20 19:32:43.231104 kubelet[2709]: I0620 19:32:43.230807 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tq6br\" (UniqueName: \"kubernetes.io/projected/bfc4a5a7-f2e7-4326-8172-338c5c23e0de-kube-api-access-tq6br\") pod \"calico-apiserver-7cc96d8cf6-czfrv\" (UID: \"bfc4a5a7-f2e7-4326-8172-338c5c23e0de\") " pod="calico-apiserver/calico-apiserver-7cc96d8cf6-czfrv" Jun 20 19:32:43.231104 kubelet[2709]: I0620 19:32:43.230823 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm75s\" (UniqueName: \"kubernetes.io/projected/930dcef4-43bd-41e9-9b33-cfe29691b1e8-kube-api-access-zm75s\") pod \"goldmane-5bd85449d4-579gn\" (UID: \"930dcef4-43bd-41e9-9b33-cfe29691b1e8\") " pod="calico-system/goldmane-5bd85449d4-579gn" Jun 20 19:32:43.231104 kubelet[2709]: I0620 19:32:43.230882 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwzf\" (UniqueName: \"kubernetes.io/projected/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-kube-api-access-spwzf\") pod \"whisker-85dfc8d959-fndhf\" (UID: \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\") " pod="calico-system/whisker-85dfc8d959-fndhf" Jun 20 19:32:43.231238 kubelet[2709]: I0620 19:32:43.230945 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9dnd\" (UniqueName: \"kubernetes.io/projected/39b2c0d8-49ef-476a-8e57-87f01819d6f1-kube-api-access-b9dnd\") pod \"calico-kube-controllers-7cd54bd598-j9rdn\" (UID: \"39b2c0d8-49ef-476a-8e57-87f01819d6f1\") " pod="calico-system/calico-kube-controllers-7cd54bd598-j9rdn" Jun 20 19:32:43.231238 kubelet[2709]: I0620 19:32:43.230963 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bfc4a5a7-f2e7-4326-8172-338c5c23e0de-calico-apiserver-certs\") pod \"calico-apiserver-7cc96d8cf6-czfrv\" (UID: \"bfc4a5a7-f2e7-4326-8172-338c5c23e0de\") " pod="calico-apiserver/calico-apiserver-7cc96d8cf6-czfrv" Jun 20 
19:32:43.231238 kubelet[2709]: I0620 19:32:43.230980 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/930dcef4-43bd-41e9-9b33-cfe29691b1e8-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-579gn\" (UID: \"930dcef4-43bd-41e9-9b33-cfe29691b1e8\") " pod="calico-system/goldmane-5bd85449d4-579gn" Jun 20 19:32:43.231238 kubelet[2709]: I0620 19:32:43.230994 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/930dcef4-43bd-41e9-9b33-cfe29691b1e8-goldmane-key-pair\") pod \"goldmane-5bd85449d4-579gn\" (UID: \"930dcef4-43bd-41e9-9b33-cfe29691b1e8\") " pod="calico-system/goldmane-5bd85449d4-579gn" Jun 20 19:32:43.231238 kubelet[2709]: I0620 19:32:43.231011 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-backend-key-pair\") pod \"whisker-85dfc8d959-fndhf\" (UID: \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\") " pod="calico-system/whisker-85dfc8d959-fndhf" Jun 20 19:32:43.231357 kubelet[2709]: I0620 19:32:43.231048 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930dcef4-43bd-41e9-9b33-cfe29691b1e8-config\") pod \"goldmane-5bd85449d4-579gn\" (UID: \"930dcef4-43bd-41e9-9b33-cfe29691b1e8\") " pod="calico-system/goldmane-5bd85449d4-579gn" Jun 20 19:32:43.231357 kubelet[2709]: I0620 19:32:43.231076 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/750ebe81-b0b4-4c6b-8270-1ad7fa1cd241-config-volume\") pod \"coredns-668d6bf9bc-7sdlx\" (UID: \"750ebe81-b0b4-4c6b-8270-1ad7fa1cd241\") " 
pod="kube-system/coredns-668d6bf9bc-7sdlx" Jun 20 19:32:43.231357 kubelet[2709]: I0620 19:32:43.231093 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqzmh\" (UniqueName: \"kubernetes.io/projected/af7de0bd-f3dc-480d-9df4-554fb6215492-kube-api-access-zqzmh\") pod \"calico-apiserver-7cc96d8cf6-s2hg5\" (UID: \"af7de0bd-f3dc-480d-9df4-554fb6215492\") " pod="calico-apiserver/calico-apiserver-7cc96d8cf6-s2hg5" Jun 20 19:32:43.394616 containerd[1593]: time="2025-06-20T19:32:43.394500157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nczfq,Uid:588ef3e4-eda0-4b6d-8388-71f60eb8e89d,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:43.427815 containerd[1593]: time="2025-06-20T19:32:43.427737692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-579gn,Uid:930dcef4-43bd-41e9-9b33-cfe29691b1e8,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:43.432368 containerd[1593]: time="2025-06-20T19:32:43.432185742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sdlx,Uid:750ebe81-b0b4-4c6b-8270-1ad7fa1cd241,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:43.437462 containerd[1593]: time="2025-06-20T19:32:43.437252762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85dfc8d959-fndhf,Uid:3cc2adaf-8ce5-4e33-9a9f-758c8f132fce,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:43.441164 containerd[1593]: time="2025-06-20T19:32:43.441127659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd54bd598-j9rdn,Uid:39b2c0d8-49ef-476a-8e57-87f01819d6f1,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:43.450078 containerd[1593]: time="2025-06-20T19:32:43.450041994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-s2hg5,Uid:af7de0bd-f3dc-480d-9df4-554fb6215492,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:43.454750 containerd[1593]: 
time="2025-06-20T19:32:43.454572762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-czfrv,Uid:bfc4a5a7-f2e7-4326-8172-338c5c23e0de,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:43.496487 containerd[1593]: time="2025-06-20T19:32:43.496431563Z" level=error msg="Failed to destroy network for sandbox \"ef5f8e099bc21fad09b9a0f55ace63dee4dd4564763824a61972e383b627ad77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.531626 containerd[1593]: time="2025-06-20T19:32:43.531557720Z" level=error msg="Failed to destroy network for sandbox \"f45e05e25eb13e3403d0ca3208b2b2cb0770d83eaca7232f0e33738a7bb5cfb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.547548 containerd[1593]: time="2025-06-20T19:32:43.547487619Z" level=error msg="Failed to destroy network for sandbox \"7ec0f5d7003505da2b2265b591c1ab40d8ef3cc1fbb8736066ae8947551868d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.552268 containerd[1593]: time="2025-06-20T19:32:43.552220462Z" level=error msg="Failed to destroy network for sandbox \"8361ec8396027a5fb3c7c7abe8456edf519a6e0650f3ea475969e896a03d36c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.555906 containerd[1593]: time="2025-06-20T19:32:43.555853839Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-85dfc8d959-fndhf,Uid:3cc2adaf-8ce5-4e33-9a9f-758c8f132fce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8361ec8396027a5fb3c7c7abe8456edf519a6e0650f3ea475969e896a03d36c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.559868 containerd[1593]: time="2025-06-20T19:32:43.559829709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nczfq,Uid:588ef3e4-eda0-4b6d-8388-71f60eb8e89d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5f8e099bc21fad09b9a0f55ace63dee4dd4564763824a61972e383b627ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.560652 containerd[1593]: time="2025-06-20T19:32:43.560617000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-579gn,Uid:930dcef4-43bd-41e9-9b33-cfe29691b1e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e05e25eb13e3403d0ca3208b2b2cb0770d83eaca7232f0e33738a7bb5cfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.560728 containerd[1593]: time="2025-06-20T19:32:43.560693756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sdlx,Uid:750ebe81-b0b4-4c6b-8270-1ad7fa1cd241,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7ec0f5d7003505da2b2265b591c1ab40d8ef3cc1fbb8736066ae8947551868d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.562527 containerd[1593]: time="2025-06-20T19:32:43.562491603Z" level=error msg="Failed to destroy network for sandbox \"03bf61796bf8cb5815d628e5d4908d897069374d6d94952d657bf85e6623feca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.564183 containerd[1593]: time="2025-06-20T19:32:43.563863900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd54bd598-j9rdn,Uid:39b2c0d8-49ef-476a-8e57-87f01819d6f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf61796bf8cb5815d628e5d4908d897069374d6d94952d657bf85e6623feca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.573397 kubelet[2709]: E0620 19:32:43.572620 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5f8e099bc21fad09b9a0f55ace63dee4dd4564763824a61972e383b627ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.573397 kubelet[2709]: E0620 19:32:43.572702 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5f8e099bc21fad09b9a0f55ace63dee4dd4564763824a61972e383b627ad77\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nczfq" Jun 20 19:32:43.573397 kubelet[2709]: E0620 19:32:43.572723 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5f8e099bc21fad09b9a0f55ace63dee4dd4564763824a61972e383b627ad77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nczfq" Jun 20 19:32:43.574718 kubelet[2709]: E0620 19:32:43.572766 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nczfq_kube-system(588ef3e4-eda0-4b6d-8388-71f60eb8e89d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nczfq_kube-system(588ef3e4-eda0-4b6d-8388-71f60eb8e89d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef5f8e099bc21fad09b9a0f55ace63dee4dd4564763824a61972e383b627ad77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nczfq" podUID="588ef3e4-eda0-4b6d-8388-71f60eb8e89d" Jun 20 19:32:43.574718 kubelet[2709]: E0620 19:32:43.573011 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8361ec8396027a5fb3c7c7abe8456edf519a6e0650f3ea475969e896a03d36c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.574718 kubelet[2709]: E0620 19:32:43.573029 2709 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8361ec8396027a5fb3c7c7abe8456edf519a6e0650f3ea475969e896a03d36c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85dfc8d959-fndhf" Jun 20 19:32:43.574974 kubelet[2709]: E0620 19:32:43.573043 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8361ec8396027a5fb3c7c7abe8456edf519a6e0650f3ea475969e896a03d36c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85dfc8d959-fndhf" Jun 20 19:32:43.574974 kubelet[2709]: E0620 19:32:43.573064 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85dfc8d959-fndhf_calico-system(3cc2adaf-8ce5-4e33-9a9f-758c8f132fce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85dfc8d959-fndhf_calico-system(3cc2adaf-8ce5-4e33-9a9f-758c8f132fce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8361ec8396027a5fb3c7c7abe8456edf519a6e0650f3ea475969e896a03d36c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85dfc8d959-fndhf" podUID="3cc2adaf-8ce5-4e33-9a9f-758c8f132fce" Jun 20 19:32:43.574974 kubelet[2709]: E0620 19:32:43.573090 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e05e25eb13e3403d0ca3208b2b2cb0770d83eaca7232f0e33738a7bb5cfb9\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.575060 kubelet[2709]: E0620 19:32:43.573103 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e05e25eb13e3403d0ca3208b2b2cb0770d83eaca7232f0e33738a7bb5cfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-579gn" Jun 20 19:32:43.575060 kubelet[2709]: E0620 19:32:43.573124 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e05e25eb13e3403d0ca3208b2b2cb0770d83eaca7232f0e33738a7bb5cfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-579gn" Jun 20 19:32:43.575060 kubelet[2709]: E0620 19:32:43.573145 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-579gn_calico-system(930dcef4-43bd-41e9-9b33-cfe29691b1e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-579gn_calico-system(930dcef4-43bd-41e9-9b33-cfe29691b1e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f45e05e25eb13e3403d0ca3208b2b2cb0770d83eaca7232f0e33738a7bb5cfb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-579gn" podUID="930dcef4-43bd-41e9-9b33-cfe29691b1e8" Jun 20 19:32:43.575225 kubelet[2709]: E0620 19:32:43.573166 2709 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ec0f5d7003505da2b2265b591c1ab40d8ef3cc1fbb8736066ae8947551868d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.575225 kubelet[2709]: E0620 19:32:43.573179 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ec0f5d7003505da2b2265b591c1ab40d8ef3cc1fbb8736066ae8947551868d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7sdlx" Jun 20 19:32:43.575225 kubelet[2709]: E0620 19:32:43.573193 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ec0f5d7003505da2b2265b591c1ab40d8ef3cc1fbb8736066ae8947551868d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7sdlx" Jun 20 19:32:43.575301 kubelet[2709]: E0620 19:32:43.573230 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7sdlx_kube-system(750ebe81-b0b4-4c6b-8270-1ad7fa1cd241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7sdlx_kube-system(750ebe81-b0b4-4c6b-8270-1ad7fa1cd241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ec0f5d7003505da2b2265b591c1ab40d8ef3cc1fbb8736066ae8947551868d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7sdlx" podUID="750ebe81-b0b4-4c6b-8270-1ad7fa1cd241" Jun 20 19:32:43.575301 kubelet[2709]: E0620 19:32:43.573274 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf61796bf8cb5815d628e5d4908d897069374d6d94952d657bf85e6623feca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.575301 kubelet[2709]: E0620 19:32:43.573294 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf61796bf8cb5815d628e5d4908d897069374d6d94952d657bf85e6623feca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd54bd598-j9rdn" Jun 20 19:32:43.575500 kubelet[2709]: E0620 19:32:43.573316 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf61796bf8cb5815d628e5d4908d897069374d6d94952d657bf85e6623feca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd54bd598-j9rdn" Jun 20 19:32:43.575500 kubelet[2709]: E0620 19:32:43.573337 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cd54bd598-j9rdn_calico-system(39b2c0d8-49ef-476a-8e57-87f01819d6f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7cd54bd598-j9rdn_calico-system(39b2c0d8-49ef-476a-8e57-87f01819d6f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03bf61796bf8cb5815d628e5d4908d897069374d6d94952d657bf85e6623feca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd54bd598-j9rdn" podUID="39b2c0d8-49ef-476a-8e57-87f01819d6f1" Jun 20 19:32:43.576054 containerd[1593]: time="2025-06-20T19:32:43.576005678Z" level=error msg="Failed to destroy network for sandbox \"ad5e5f3a61d1cb03cd3caafc5011eb3b70672b8499213dc531f6f7b893be5bbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.577400 containerd[1593]: time="2025-06-20T19:32:43.577361963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-czfrv,Uid:bfc4a5a7-f2e7-4326-8172-338c5c23e0de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5e5f3a61d1cb03cd3caafc5011eb3b70672b8499213dc531f6f7b893be5bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.577859 containerd[1593]: time="2025-06-20T19:32:43.577759221Z" level=error msg="Failed to destroy network for sandbox \"5277d2ba78494bca4763cfc115381c30f7386e31ec0af25fc778ffa57cc7b593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.577908 kubelet[2709]: E0620 19:32:43.577613 2709 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5e5f3a61d1cb03cd3caafc5011eb3b70672b8499213dc531f6f7b893be5bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.578198 kubelet[2709]: E0620 19:32:43.578012 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5e5f3a61d1cb03cd3caafc5011eb3b70672b8499213dc531f6f7b893be5bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-czfrv" Jun 20 19:32:43.578198 kubelet[2709]: E0620 19:32:43.578067 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5e5f3a61d1cb03cd3caafc5011eb3b70672b8499213dc531f6f7b893be5bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-czfrv" Jun 20 19:32:43.578198 kubelet[2709]: E0620 19:32:43.578161 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc96d8cf6-czfrv_calico-apiserver(bfc4a5a7-f2e7-4326-8172-338c5c23e0de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc96d8cf6-czfrv_calico-apiserver(bfc4a5a7-f2e7-4326-8172-338c5c23e0de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad5e5f3a61d1cb03cd3caafc5011eb3b70672b8499213dc531f6f7b893be5bbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-czfrv" podUID="bfc4a5a7-f2e7-4326-8172-338c5c23e0de" Jun 20 19:32:43.579195 containerd[1593]: time="2025-06-20T19:32:43.579158318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-s2hg5,Uid:af7de0bd-f3dc-480d-9df4-554fb6215492,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5277d2ba78494bca4763cfc115381c30f7386e31ec0af25fc778ffa57cc7b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.579337 kubelet[2709]: E0620 19:32:43.579316 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5277d2ba78494bca4763cfc115381c30f7386e31ec0af25fc778ffa57cc7b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:43.579375 kubelet[2709]: E0620 19:32:43.579345 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5277d2ba78494bca4763cfc115381c30f7386e31ec0af25fc778ffa57cc7b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-s2hg5" Jun 20 19:32:43.579375 kubelet[2709]: E0620 19:32:43.579365 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5277d2ba78494bca4763cfc115381c30f7386e31ec0af25fc778ffa57cc7b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-s2hg5" Jun 20 19:32:43.579432 kubelet[2709]: E0620 19:32:43.579395 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc96d8cf6-s2hg5_calico-apiserver(af7de0bd-f3dc-480d-9df4-554fb6215492)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc96d8cf6-s2hg5_calico-apiserver(af7de0bd-f3dc-480d-9df4-554fb6215492)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5277d2ba78494bca4763cfc115381c30f7386e31ec0af25fc778ffa57cc7b593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-s2hg5" podUID="af7de0bd-f3dc-480d-9df4-554fb6215492" Jun 20 19:32:44.158278 systemd[1]: Created slice kubepods-besteffort-pod10fc28c5_4d33_4680_87d1_31c5370f21a5.slice - libcontainer container kubepods-besteffort-pod10fc28c5_4d33_4680_87d1_31c5370f21a5.slice. 
Jun 20 19:32:44.160584 containerd[1593]: time="2025-06-20T19:32:44.160550679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82mq6,Uid:10fc28c5-4d33-4680-87d1-31c5370f21a5,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:44.209490 containerd[1593]: time="2025-06-20T19:32:44.209436354Z" level=error msg="Failed to destroy network for sandbox \"d4005fc928d98b6f24ac228b20140ae32c3a05de151fd1610f9a151a0dd7a76e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:44.211236 containerd[1593]: time="2025-06-20T19:32:44.211193732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82mq6,Uid:10fc28c5-4d33-4680-87d1-31c5370f21a5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4005fc928d98b6f24ac228b20140ae32c3a05de151fd1610f9a151a0dd7a76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:44.211457 kubelet[2709]: E0620 19:32:44.211411 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4005fc928d98b6f24ac228b20140ae32c3a05de151fd1610f9a151a0dd7a76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:32:44.211538 kubelet[2709]: E0620 19:32:44.211479 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4005fc928d98b6f24ac228b20140ae32c3a05de151fd1610f9a151a0dd7a76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-82mq6" Jun 20 19:32:44.211538 kubelet[2709]: E0620 19:32:44.211502 2709 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4005fc928d98b6f24ac228b20140ae32c3a05de151fd1610f9a151a0dd7a76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-82mq6" Jun 20 19:32:44.211588 kubelet[2709]: E0620 19:32:44.211548 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-82mq6_calico-system(10fc28c5-4d33-4680-87d1-31c5370f21a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-82mq6_calico-system(10fc28c5-4d33-4680-87d1-31c5370f21a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4005fc928d98b6f24ac228b20140ae32c3a05de151fd1610f9a151a0dd7a76e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-82mq6" podUID="10fc28c5-4d33-4680-87d1-31c5370f21a5" Jun 20 19:32:44.211700 systemd[1]: run-netns-cni\x2d115f8f5f\x2d2f95\x2d15ec\x2d3038\x2d056e91bfee85.mount: Deactivated successfully. Jun 20 19:32:49.570188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4080846480.mount: Deactivated successfully. 
Jun 20 19:32:50.509899 containerd[1593]: time="2025-06-20T19:32:50.509833548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:50.510723 containerd[1593]: time="2025-06-20T19:32:50.510645581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:32:50.512084 containerd[1593]: time="2025-06-20T19:32:50.512044248Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:50.514039 containerd[1593]: time="2025-06-20T19:32:50.513998662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:50.514570 containerd[1593]: time="2025-06-20T19:32:50.514497710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 7.289584752s" Jun 20 19:32:50.514570 containerd[1593]: time="2025-06-20T19:32:50.514535211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:32:50.524097 containerd[1593]: time="2025-06-20T19:32:50.524056894Z" level=info msg="CreateContainer within sandbox \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:32:50.561590 containerd[1593]: time="2025-06-20T19:32:50.561540897Z" level=info msg="Container 
b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:50.582975 containerd[1593]: time="2025-06-20T19:32:50.582921262Z" level=info msg="CreateContainer within sandbox \"c0d8202b8a8db5557f223ecba8cda476f9d0afda9702554f14babd91577a6205\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7\"" Jun 20 19:32:50.583576 containerd[1593]: time="2025-06-20T19:32:50.583473031Z" level=info msg="StartContainer for \"b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7\"" Jun 20 19:32:50.584786 containerd[1593]: time="2025-06-20T19:32:50.584732984Z" level=info msg="connecting to shim b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7" address="unix:///run/containerd/s/e9fd726b1d43da2867985d35a78a024a02292f1c3b938ed9ea8240f76f2a3d2b" protocol=ttrpc version=3 Jun 20 19:32:50.614052 systemd[1]: Started cri-containerd-b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7.scope - libcontainer container b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7. Jun 20 19:32:50.769641 kubelet[2709]: I0620 19:32:50.769494 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:51.023652 containerd[1593]: time="2025-06-20T19:32:51.023171279Z" level=info msg="StartContainer for \"b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7\" returns successfully" Jun 20 19:32:51.048972 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:32:51.049578 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 20 19:32:51.183376 kubelet[2709]: I0620 19:32:51.183327 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-ca-bundle\") pod \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\" (UID: \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\") " Jun 20 19:32:51.183376 kubelet[2709]: I0620 19:32:51.183377 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-backend-key-pair\") pod \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\" (UID: \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\") " Jun 20 19:32:51.183594 kubelet[2709]: I0620 19:32:51.183410 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spwzf\" (UniqueName: \"kubernetes.io/projected/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-kube-api-access-spwzf\") pod \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\" (UID: \"3cc2adaf-8ce5-4e33-9a9f-758c8f132fce\") " Jun 20 19:32:51.184160 kubelet[2709]: I0620 19:32:51.184131 2709 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3cc2adaf-8ce5-4e33-9a9f-758c8f132fce" (UID: "3cc2adaf-8ce5-4e33-9a9f-758c8f132fce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:32:51.188802 kubelet[2709]: I0620 19:32:51.186999 2709 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-kube-api-access-spwzf" (OuterVolumeSpecName: "kube-api-access-spwzf") pod "3cc2adaf-8ce5-4e33-9a9f-758c8f132fce" (UID: "3cc2adaf-8ce5-4e33-9a9f-758c8f132fce"). InnerVolumeSpecName "kube-api-access-spwzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:32:51.188994 systemd[1]: var-lib-kubelet-pods-3cc2adaf\x2d8ce5\x2d4e33\x2d9a9f\x2d758c8f132fce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dspwzf.mount: Deactivated successfully. Jun 20 19:32:51.190755 kubelet[2709]: I0620 19:32:51.190303 2709 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3cc2adaf-8ce5-4e33-9a9f-758c8f132fce" (UID: "3cc2adaf-8ce5-4e33-9a9f-758c8f132fce"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:32:51.193395 systemd[1]: var-lib-kubelet-pods-3cc2adaf\x2d8ce5\x2d4e33\x2d9a9f\x2d758c8f132fce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 19:32:51.274714 systemd[1]: Removed slice kubepods-besteffort-pod3cc2adaf_8ce5_4e33_9a9f_758c8f132fce.slice - libcontainer container kubepods-besteffort-pod3cc2adaf_8ce5_4e33_9a9f_758c8f132fce.slice. 
Jun 20 19:32:51.284059 kubelet[2709]: I0620 19:32:51.284007 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 20 19:32:51.284059 kubelet[2709]: I0620 19:32:51.284040 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 20 19:32:51.284059 kubelet[2709]: I0620 19:32:51.284050 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-spwzf\" (UniqueName: \"kubernetes.io/projected/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce-kube-api-access-spwzf\") on node \"localhost\" DevicePath \"\"" Jun 20 19:32:51.287963 kubelet[2709]: I0620 19:32:51.287315 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mzrs4" podStartSLOduration=3.700094896 podStartE2EDuration="19.287293996s" podCreationTimestamp="2025-06-20 19:32:32 +0000 UTC" firstStartedPulling="2025-06-20 19:32:34.928132254 +0000 UTC m=+20.133988607" lastFinishedPulling="2025-06-20 19:32:50.515331354 +0000 UTC m=+35.721187707" observedRunningTime="2025-06-20 19:32:51.286632731 +0000 UTC m=+36.492489084" watchObservedRunningTime="2025-06-20 19:32:51.287293996 +0000 UTC m=+36.493150349" Jun 20 19:32:51.338258 systemd[1]: Created slice kubepods-besteffort-pod9c7781f6_28e7_4a72_bc27_515197b9baf2.slice - libcontainer container kubepods-besteffort-pod9c7781f6_28e7_4a72_bc27_515197b9baf2.slice. 
Jun 20 19:32:51.384744 kubelet[2709]: I0620 19:32:51.384644 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7781f6-28e7-4a72-bc27-515197b9baf2-whisker-ca-bundle\") pod \"whisker-6cddd98664-6hn8r\" (UID: \"9c7781f6-28e7-4a72-bc27-515197b9baf2\") " pod="calico-system/whisker-6cddd98664-6hn8r" Jun 20 19:32:51.385369 kubelet[2709]: I0620 19:32:51.385052 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqss\" (UniqueName: \"kubernetes.io/projected/9c7781f6-28e7-4a72-bc27-515197b9baf2-kube-api-access-7vqss\") pod \"whisker-6cddd98664-6hn8r\" (UID: \"9c7781f6-28e7-4a72-bc27-515197b9baf2\") " pod="calico-system/whisker-6cddd98664-6hn8r" Jun 20 19:32:51.385369 kubelet[2709]: I0620 19:32:51.385127 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c7781f6-28e7-4a72-bc27-515197b9baf2-whisker-backend-key-pair\") pod \"whisker-6cddd98664-6hn8r\" (UID: \"9c7781f6-28e7-4a72-bc27-515197b9baf2\") " pod="calico-system/whisker-6cddd98664-6hn8r" Jun 20 19:32:51.643291 containerd[1593]: time="2025-06-20T19:32:51.643231348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cddd98664-6hn8r,Uid:9c7781f6-28e7-4a72-bc27-515197b9baf2,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:51.897945 systemd-networkd[1488]: calida2478c845e: Link UP Jun 20 19:32:51.899187 systemd-networkd[1488]: calida2478c845e: Gained carrier Jun 20 19:32:51.909099 containerd[1593]: 2025-06-20 19:32:51.780 [INFO][3850] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:32:51.909099 containerd[1593]: 2025-06-20 19:32:51.797 [INFO][3850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--6cddd98664--6hn8r-eth0 whisker-6cddd98664- calico-system 9c7781f6-28e7-4a72-bc27-515197b9baf2 867 0 2025-06-20 19:32:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cddd98664 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6cddd98664-6hn8r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calida2478c845e [] [] }} ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-" Jun 20 19:32:51.909099 containerd[1593]: 2025-06-20 19:32:51.797 [INFO][3850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.909099 containerd[1593]: 2025-06-20 19:32:51.856 [INFO][3864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" HandleID="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Workload="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.856 [INFO][3864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" HandleID="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Workload="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041f200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6cddd98664-6hn8r", "timestamp":"2025-06-20 19:32:51.856068504 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.856 [INFO][3864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.856 [INFO][3864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.856 [INFO][3864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.863 [INFO][3864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" host="localhost" Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.868 [INFO][3864] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.875 [INFO][3864] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.877 [INFO][3864] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.878 [INFO][3864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:51.909286 containerd[1593]: 2025-06-20 19:32:51.878 [INFO][3864] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" host="localhost" Jun 20 19:32:51.909503 containerd[1593]: 2025-06-20 19:32:51.880 [INFO][3864] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411 Jun 20 19:32:51.909503 containerd[1593]: 2025-06-20 19:32:51.883 [INFO][3864] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" host="localhost" Jun 20 19:32:51.909503 containerd[1593]: 2025-06-20 19:32:51.887 [INFO][3864] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" host="localhost" Jun 20 19:32:51.909503 containerd[1593]: 2025-06-20 19:32:51.887 [INFO][3864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" host="localhost" Jun 20 19:32:51.909503 containerd[1593]: 2025-06-20 19:32:51.887 [INFO][3864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:51.909503 containerd[1593]: 2025-06-20 19:32:51.887 [INFO][3864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" HandleID="k8s-pod-network.6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Workload="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.909629 containerd[1593]: 2025-06-20 19:32:51.891 [INFO][3850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cddd98664--6hn8r-eth0", GenerateName:"whisker-6cddd98664-", Namespace:"calico-system", SelfLink:"", UID:"9c7781f6-28e7-4a72-bc27-515197b9baf2", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cddd98664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6cddd98664-6hn8r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida2478c845e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:51.909629 containerd[1593]: 2025-06-20 19:32:51.891 [INFO][3850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.909699 containerd[1593]: 2025-06-20 19:32:51.891 [INFO][3850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida2478c845e ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.909699 containerd[1593]: 2025-06-20 19:32:51.898 [INFO][3850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.909742 containerd[1593]: 2025-06-20 19:32:51.898 [INFO][3850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cddd98664--6hn8r-eth0", GenerateName:"whisker-6cddd98664-", Namespace:"calico-system", SelfLink:"", UID:"9c7781f6-28e7-4a72-bc27-515197b9baf2", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 51, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cddd98664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411", Pod:"whisker-6cddd98664-6hn8r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida2478c845e", MAC:"7a:a7:cf:f9:51:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:51.909833 containerd[1593]: 2025-06-20 19:32:51.904 [INFO][3850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" Namespace="calico-system" Pod="whisker-6cddd98664-6hn8r" WorkloadEndpoint="localhost-k8s-whisker--6cddd98664--6hn8r-eth0" Jun 20 19:32:51.982191 containerd[1593]: time="2025-06-20T19:32:51.982152108Z" level=info msg="connecting to shim 6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411" address="unix:///run/containerd/s/f894ff2a91aea7b861ae24f326b16dbe7886a8af68c6be4a4a963ee88e2f14ed" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:52.009909 systemd[1]: Started cri-containerd-6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411.scope - libcontainer container 6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411. 
Jun 20 19:32:52.023175 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:52.054904 containerd[1593]: time="2025-06-20T19:32:52.054845348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cddd98664-6hn8r,Uid:9c7781f6-28e7-4a72-bc27-515197b9baf2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411\"" Jun 20 19:32:52.056264 containerd[1593]: time="2025-06-20T19:32:52.056165835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:32:52.857104 systemd-networkd[1488]: vxlan.calico: Link UP Jun 20 19:32:52.857126 systemd-networkd[1488]: vxlan.calico: Gained carrier Jun 20 19:32:53.084864 systemd-networkd[1488]: calida2478c845e: Gained IPv6LL Jun 20 19:32:53.154794 kubelet[2709]: I0620 19:32:53.154674 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc2adaf-8ce5-4e33-9a9f-758c8f132fce" path="/var/lib/kubelet/pods/3cc2adaf-8ce5-4e33-9a9f-758c8f132fce/volumes" Jun 20 19:32:53.684635 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:59664.service - OpenSSH per-connection server daemon (10.0.0.1:59664). 
Jun 20 19:32:53.709941 containerd[1593]: time="2025-06-20T19:32:53.709893506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:53.711360 containerd[1593]: time="2025-06-20T19:32:53.711335824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:32:53.713055 containerd[1593]: time="2025-06-20T19:32:53.713029018Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:53.715029 containerd[1593]: time="2025-06-20T19:32:53.715006021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:53.715559 containerd[1593]: time="2025-06-20T19:32:53.715513154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.659313133s" Jun 20 19:32:53.715604 containerd[1593]: time="2025-06-20T19:32:53.715559612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:32:53.717353 containerd[1593]: time="2025-06-20T19:32:53.717327036Z" level=info msg="CreateContainer within sandbox \"6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:32:53.736803 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 59664 ssh2: 
RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:32:53.738321 containerd[1593]: time="2025-06-20T19:32:53.738285670Z" level=info msg="Container 1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:53.739127 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:32:53.747220 containerd[1593]: time="2025-06-20T19:32:53.746849244Z" level=info msg="CreateContainer within sandbox \"6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83\"" Jun 20 19:32:53.746907 systemd-logind[1580]: New session 8 of user core. Jun 20 19:32:53.747975 containerd[1593]: time="2025-06-20T19:32:53.747951086Z" level=info msg="StartContainer for \"1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83\"" Jun 20 19:32:53.748924 containerd[1593]: time="2025-06-20T19:32:53.748902501Z" level=info msg="connecting to shim 1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83" address="unix:///run/containerd/s/f894ff2a91aea7b861ae24f326b16dbe7886a8af68c6be4a4a963ee88e2f14ed" protocol=ttrpc version=3 Jun 20 19:32:53.751961 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:32:53.772949 systemd[1]: Started cri-containerd-1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83.scope - libcontainer container 1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83. 
Jun 20 19:32:53.822680 containerd[1593]: time="2025-06-20T19:32:53.822636891Z" level=info msg="StartContainer for \"1215943ef90352abbc72d1b818e52c6871842e30ce8d0e34d75b42d1c327ee83\" returns successfully" Jun 20 19:32:53.825914 containerd[1593]: time="2025-06-20T19:32:53.825824360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:32:53.892216 sshd[4140]: Connection closed by 10.0.0.1 port 59664 Jun 20 19:32:53.892474 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Jun 20 19:32:53.896788 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:59664.service: Deactivated successfully. Jun 20 19:32:53.898795 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:32:53.899484 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:32:53.900518 systemd-logind[1580]: Removed session 8. Jun 20 19:32:54.152146 containerd[1593]: time="2025-06-20T19:32:54.152068845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-579gn,Uid:930dcef4-43bd-41e9-9b33-cfe29691b1e8,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:54.152293 containerd[1593]: time="2025-06-20T19:32:54.152240331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd54bd598-j9rdn,Uid:39b2c0d8-49ef-476a-8e57-87f01819d6f1,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:54.246760 systemd-networkd[1488]: cali4720683bf25: Link UP Jun 20 19:32:54.247412 systemd-networkd[1488]: cali4720683bf25: Gained carrier Jun 20 19:32:54.260559 containerd[1593]: 2025-06-20 19:32:54.191 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0 calico-kube-controllers-7cd54bd598- calico-system 39b2c0d8-49ef-476a-8e57-87f01819d6f1 800 0 2025-06-20 19:32:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:7cd54bd598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cd54bd598-j9rdn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4720683bf25 [] [] }} ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-" Jun 20 19:32:54.260559 containerd[1593]: 2025-06-20 19:32:54.191 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.260559 containerd[1593]: 2025-06-20 19:32:54.214 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" HandleID="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Workload="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.214 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" HandleID="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Workload="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042f2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cd54bd598-j9rdn", "timestamp":"2025-06-20 19:32:54.214662101 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.214 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.214 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.214 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.221 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" host="localhost" Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.224 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.228 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.229 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.231 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:54.260837 containerd[1593]: 2025-06-20 19:32:54.231 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" host="localhost" Jun 20 19:32:54.261087 containerd[1593]: 2025-06-20 19:32:54.232 [INFO][4217] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b Jun 20 19:32:54.261087 containerd[1593]: 2025-06-20 19:32:54.235 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" host="localhost" Jun 20 19:32:54.261087 containerd[1593]: 2025-06-20 19:32:54.241 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" host="localhost" Jun 20 19:32:54.261087 containerd[1593]: 2025-06-20 19:32:54.241 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" host="localhost" Jun 20 19:32:54.261087 containerd[1593]: 2025-06-20 19:32:54.241 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:54.261087 containerd[1593]: 2025-06-20 19:32:54.241 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" HandleID="k8s-pod-network.0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Workload="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.261215 containerd[1593]: 2025-06-20 19:32:54.244 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0", GenerateName:"calico-kube-controllers-7cd54bd598-", Namespace:"calico-system", SelfLink:"", UID:"39b2c0d8-49ef-476a-8e57-87f01819d6f1", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd54bd598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cd54bd598-j9rdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4720683bf25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:54.261266 containerd[1593]: 2025-06-20 19:32:54.244 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.261266 containerd[1593]: 2025-06-20 19:32:54.244 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4720683bf25 ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.261266 containerd[1593]: 2025-06-20 19:32:54.246 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.261330 containerd[1593]: 2025-06-20 19:32:54.247 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0", GenerateName:"calico-kube-controllers-7cd54bd598-", Namespace:"calico-system", SelfLink:"", UID:"39b2c0d8-49ef-476a-8e57-87f01819d6f1", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd54bd598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b", Pod:"calico-kube-controllers-7cd54bd598-j9rdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4720683bf25", MAC:"4a:0f:cd:95:91:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:54.261375 containerd[1593]: 2025-06-20 19:32:54.256 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" Namespace="calico-system" Pod="calico-kube-controllers-7cd54bd598-j9rdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd54bd598--j9rdn-eth0" Jun 20 19:32:54.306429 containerd[1593]: time="2025-06-20T19:32:54.306377609Z" level=info msg="connecting to shim 
0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b" address="unix:///run/containerd/s/5681154d311b3fd2c11eaa9848548f9f8a3153d5e391b5aede8b5dc1f34432d6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:54.329912 systemd[1]: Started cri-containerd-0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b.scope - libcontainer container 0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b. Jun 20 19:32:54.343365 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:54.351206 systemd-networkd[1488]: cali87cd2809ec0: Link UP Jun 20 19:32:54.352290 systemd-networkd[1488]: cali87cd2809ec0: Gained carrier Jun 20 19:32:54.367195 containerd[1593]: 2025-06-20 19:32:54.187 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5bd85449d4--579gn-eth0 goldmane-5bd85449d4- calico-system 930dcef4-43bd-41e9-9b33-cfe29691b1e8 798 0 2025-06-20 19:32:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5bd85449d4-579gn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali87cd2809ec0 [] [] }} ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-" Jun 20 19:32:54.367195 containerd[1593]: 2025-06-20 19:32:54.188 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.367195 containerd[1593]: 
2025-06-20 19:32:54.215 [INFO][4215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" HandleID="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Workload="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.215 [INFO][4215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" HandleID="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Workload="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5bd85449d4-579gn", "timestamp":"2025-06-20 19:32:54.21522526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.215 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.241 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.241 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.322 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" host="localhost" Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.327 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.331 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.333 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.334 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:54.367760 containerd[1593]: 2025-06-20 19:32:54.334 [INFO][4215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" host="localhost" Jun 20 19:32:54.368528 containerd[1593]: 2025-06-20 19:32:54.336 [INFO][4215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152 Jun 20 19:32:54.368528 containerd[1593]: 2025-06-20 19:32:54.339 [INFO][4215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" host="localhost" Jun 20 19:32:54.368528 containerd[1593]: 2025-06-20 19:32:54.344 [INFO][4215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" host="localhost" Jun 20 19:32:54.368528 containerd[1593]: 2025-06-20 19:32:54.344 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" host="localhost" Jun 20 19:32:54.368528 containerd[1593]: 2025-06-20 19:32:54.344 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:54.368528 containerd[1593]: 2025-06-20 19:32:54.344 [INFO][4215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" HandleID="k8s-pod-network.ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Workload="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.368650 containerd[1593]: 2025-06-20 19:32:54.347 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--579gn-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"930dcef4-43bd-41e9-9b33-cfe29691b1e8", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5bd85449d4-579gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali87cd2809ec0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:54.368650 containerd[1593]: 2025-06-20 19:32:54.347 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.368729 containerd[1593]: 2025-06-20 19:32:54.347 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87cd2809ec0 ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.368729 containerd[1593]: 2025-06-20 19:32:54.352 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.368769 containerd[1593]: 2025-06-20 19:32:54.352 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" 
WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--579gn-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"930dcef4-43bd-41e9-9b33-cfe29691b1e8", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152", Pod:"goldmane-5bd85449d4-579gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali87cd2809ec0", MAC:"9e:b9:ee:92:a2:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:54.368854 containerd[1593]: 2025-06-20 19:32:54.362 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" Namespace="calico-system" Pod="goldmane-5bd85449d4-579gn" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--579gn-eth0" Jun 20 19:32:54.422210 containerd[1593]: time="2025-06-20T19:32:54.422095323Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7cd54bd598-j9rdn,Uid:39b2c0d8-49ef-476a-8e57-87f01819d6f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b\"" Jun 20 19:32:54.446066 containerd[1593]: time="2025-06-20T19:32:54.446004552Z" level=info msg="connecting to shim ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152" address="unix:///run/containerd/s/292edfb2a70cbbf18856f45dc7bcf20c5931d9f4b29260db9ad257df211aab9e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:54.474926 systemd[1]: Started cri-containerd-ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152.scope - libcontainer container ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152. Jun 20 19:32:54.487710 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:54.516581 containerd[1593]: time="2025-06-20T19:32:54.516532464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-579gn,Uid:930dcef4-43bd-41e9-9b33-cfe29691b1e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152\"" Jun 20 19:32:54.620945 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL Jun 20 19:32:55.152142 containerd[1593]: time="2025-06-20T19:32:55.152017857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-czfrv,Uid:bfc4a5a7-f2e7-4326-8172-338c5c23e0de,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:55.153126 containerd[1593]: time="2025-06-20T19:32:55.152638584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-s2hg5,Uid:af7de0bd-f3dc-480d-9df4-554fb6215492,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:55.309685 systemd-networkd[1488]: cali9f82c36fc27: Link UP Jun 20 19:32:55.310735 systemd-networkd[1488]: cali9f82c36fc27: Gained carrier Jun 20 
19:32:55.325079 systemd-networkd[1488]: cali4720683bf25: Gained IPv6LL Jun 20 19:32:55.334875 containerd[1593]: 2025-06-20 19:32:55.245 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0 calico-apiserver-7cc96d8cf6- calico-apiserver bfc4a5a7-f2e7-4326-8172-338c5c23e0de 793 0 2025-06-20 19:32:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc96d8cf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cc96d8cf6-czfrv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f82c36fc27 [] [] }} ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-" Jun 20 19:32:55.334875 containerd[1593]: 2025-06-20 19:32:55.245 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.334875 containerd[1593]: 2025-06-20 19:32:55.270 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" HandleID="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Workload="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.271 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" HandleID="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Workload="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a6450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cc96d8cf6-czfrv", "timestamp":"2025-06-20 19:32:55.270963182 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.271 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.271 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.271 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.277 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" host="localhost" Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.284 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.287 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.289 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.291 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:55.335360 containerd[1593]: 2025-06-20 19:32:55.291 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" host="localhost" Jun 20 19:32:55.336044 containerd[1593]: 2025-06-20 19:32:55.292 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa Jun 20 19:32:55.336044 containerd[1593]: 2025-06-20 19:32:55.297 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" host="localhost" Jun 20 19:32:55.336044 containerd[1593]: 2025-06-20 19:32:55.302 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" host="localhost" Jun 20 19:32:55.336044 containerd[1593]: 2025-06-20 19:32:55.302 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" host="localhost" Jun 20 19:32:55.336044 containerd[1593]: 2025-06-20 19:32:55.302 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:55.336044 containerd[1593]: 2025-06-20 19:32:55.302 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" HandleID="k8s-pod-network.546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Workload="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.336187 containerd[1593]: 2025-06-20 19:32:55.305 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0", GenerateName:"calico-apiserver-7cc96d8cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfc4a5a7-f2e7-4326-8172-338c5c23e0de", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc96d8cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cc96d8cf6-czfrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f82c36fc27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:55.336242 containerd[1593]: 2025-06-20 19:32:55.305 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.336242 containerd[1593]: 2025-06-20 19:32:55.305 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f82c36fc27 ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.336242 containerd[1593]: 2025-06-20 19:32:55.311 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.336308 containerd[1593]: 2025-06-20 19:32:55.314 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0", 
GenerateName:"calico-apiserver-7cc96d8cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfc4a5a7-f2e7-4326-8172-338c5c23e0de", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc96d8cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa", Pod:"calico-apiserver-7cc96d8cf6-czfrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f82c36fc27", MAC:"46:81:66:55:d1:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:55.336358 containerd[1593]: 2025-06-20 19:32:55.327 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-czfrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--czfrv-eth0" Jun 20 19:32:55.365976 containerd[1593]: time="2025-06-20T19:32:55.365921421Z" level=info msg="connecting to shim 546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa" 
address="unix:///run/containerd/s/456a81f3a5957cd4bf4236946f4eb24acd3d5c2fa0d8087813083ccdefc57810" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:55.398013 systemd[1]: Started cri-containerd-546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa.scope - libcontainer container 546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa. Jun 20 19:32:55.412852 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:55.418043 systemd-networkd[1488]: calibd0559bb3fa: Link UP Jun 20 19:32:55.419016 systemd-networkd[1488]: calibd0559bb3fa: Gained carrier Jun 20 19:32:55.439769 containerd[1593]: 2025-06-20 19:32:55.337 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0 calico-apiserver-7cc96d8cf6- calico-apiserver af7de0bd-f3dc-480d-9df4-554fb6215492 797 0 2025-06-20 19:32:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc96d8cf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cc96d8cf6-s2hg5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd0559bb3fa [] [] }} ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-" Jun 20 19:32:55.439769 containerd[1593]: 2025-06-20 19:32:55.337 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.439769 containerd[1593]: 2025-06-20 19:32:55.373 [INFO][4396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" HandleID="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Workload="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.373 [INFO][4396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" HandleID="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Workload="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cc96d8cf6-s2hg5", "timestamp":"2025-06-20 19:32:55.373071566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.373 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.373 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.373 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.381 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" host="localhost" Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.385 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.389 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.391 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.394 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:55.440042 containerd[1593]: 2025-06-20 19:32:55.394 [INFO][4396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" host="localhost" Jun 20 19:32:55.440264 containerd[1593]: 2025-06-20 19:32:55.395 [INFO][4396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f Jun 20 19:32:55.440264 containerd[1593]: 2025-06-20 19:32:55.400 [INFO][4396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" host="localhost" Jun 20 19:32:55.440264 containerd[1593]: 2025-06-20 19:32:55.409 [INFO][4396] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" host="localhost" Jun 20 19:32:55.440264 containerd[1593]: 2025-06-20 19:32:55.409 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" host="localhost" Jun 20 19:32:55.440264 containerd[1593]: 2025-06-20 19:32:55.409 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:55.440264 containerd[1593]: 2025-06-20 19:32:55.409 [INFO][4396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" HandleID="k8s-pod-network.9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Workload="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.440378 containerd[1593]: 2025-06-20 19:32:55.413 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0", GenerateName:"calico-apiserver-7cc96d8cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"af7de0bd-f3dc-480d-9df4-554fb6215492", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc96d8cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cc96d8cf6-s2hg5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd0559bb3fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:55.440428 containerd[1593]: 2025-06-20 19:32:55.413 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.440428 containerd[1593]: 2025-06-20 19:32:55.413 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd0559bb3fa ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.440428 containerd[1593]: 2025-06-20 19:32:55.418 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.440494 containerd[1593]: 2025-06-20 19:32:55.419 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0", GenerateName:"calico-apiserver-7cc96d8cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"af7de0bd-f3dc-480d-9df4-554fb6215492", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc96d8cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f", Pod:"calico-apiserver-7cc96d8cf6-s2hg5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd0559bb3fa", MAC:"96:4f:df:ad:22:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:55.440543 containerd[1593]: 2025-06-20 19:32:55.432 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" Namespace="calico-apiserver" Pod="calico-apiserver-7cc96d8cf6-s2hg5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc96d8cf6--s2hg5-eth0" Jun 20 19:32:55.457841 containerd[1593]: time="2025-06-20T19:32:55.457798876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-czfrv,Uid:bfc4a5a7-f2e7-4326-8172-338c5c23e0de,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa\"" Jun 20 19:32:55.469269 containerd[1593]: time="2025-06-20T19:32:55.469229872Z" level=info msg="connecting to shim 9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f" address="unix:///run/containerd/s/3ddadaaba6d8ce0a7b6f86de4d68573d65d314b94f5b245c55a51c2a4a8bc531" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:55.497917 systemd[1]: Started cri-containerd-9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f.scope - libcontainer container 9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f. Jun 20 19:32:55.511690 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:55.543201 containerd[1593]: time="2025-06-20T19:32:55.543054420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc96d8cf6-s2hg5,Uid:af7de0bd-f3dc-480d-9df4-554fb6215492,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f\"" Jun 20 19:32:55.708004 systemd-networkd[1488]: cali87cd2809ec0: Gained IPv6LL Jun 20 19:32:55.743081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583238894.mount: Deactivated successfully. 
Jun 20 19:32:55.857608 containerd[1593]: time="2025-06-20T19:32:55.857555472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:55.858499 containerd[1593]: time="2025-06-20T19:32:55.858459537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:32:55.859956 containerd[1593]: time="2025-06-20T19:32:55.859921270Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:55.862528 containerd[1593]: time="2025-06-20T19:32:55.862488119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:55.863054 containerd[1593]: time="2025-06-20T19:32:55.863029838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 2.037172484s" Jun 20 19:32:55.863114 containerd[1593]: time="2025-06-20T19:32:55.863058973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:32:55.864358 containerd[1593]: time="2025-06-20T19:32:55.864310106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:32:55.865332 containerd[1593]: time="2025-06-20T19:32:55.865304994Z" level=info msg="CreateContainer within sandbox 
\"6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:32:55.873694 containerd[1593]: time="2025-06-20T19:32:55.873658330Z" level=info msg="Container c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:55.881080 containerd[1593]: time="2025-06-20T19:32:55.881039703Z" level=info msg="CreateContainer within sandbox \"6def9f2bde38b4a5265fe27e1d9327da8be5901e8bf2979c47e4448c50f56411\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114\"" Jun 20 19:32:55.881562 containerd[1593]: time="2025-06-20T19:32:55.881463126Z" level=info msg="StartContainer for \"c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114\"" Jun 20 19:32:55.882427 containerd[1593]: time="2025-06-20T19:32:55.882397279Z" level=info msg="connecting to shim c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114" address="unix:///run/containerd/s/f894ff2a91aea7b861ae24f326b16dbe7886a8af68c6be4a4a963ee88e2f14ed" protocol=ttrpc version=3 Jun 20 19:32:55.901900 systemd[1]: Started cri-containerd-c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114.scope - libcontainer container c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114. 
Jun 20 19:32:55.955407 containerd[1593]: time="2025-06-20T19:32:55.955365642Z" level=info msg="StartContainer for \"c6a7777627d736a2c7d4679ff456fbf5ab58bbfca88de9b11946648898f70114\" returns successfully" Jun 20 19:32:56.151927 containerd[1593]: time="2025-06-20T19:32:56.151882688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sdlx,Uid:750ebe81-b0b4-4c6b-8270-1ad7fa1cd241,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:56.240640 systemd-networkd[1488]: cali2e0e99f08e0: Link UP Jun 20 19:32:56.241277 systemd-networkd[1488]: cali2e0e99f08e0: Gained carrier Jun 20 19:32:56.252676 containerd[1593]: 2025-06-20 19:32:56.184 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0 coredns-668d6bf9bc- kube-system 750ebe81-b0b4-4c6b-8270-1ad7fa1cd241 795 0 2025-06-20 19:32:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7sdlx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e0e99f08e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-" Jun 20 19:32:56.252676 containerd[1593]: 2025-06-20 19:32:56.185 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.252676 containerd[1593]: 2025-06-20 19:32:56.207 [INFO][4554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" HandleID="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Workload="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.207 [INFO][4554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" HandleID="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Workload="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7sdlx", "timestamp":"2025-06-20 19:32:56.20772725 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.208 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.208 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.208 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.213 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" host="localhost" Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.218 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.222 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.223 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.225 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:56.253136 containerd[1593]: 2025-06-20 19:32:56.225 [INFO][4554] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" host="localhost" Jun 20 19:32:56.253361 containerd[1593]: 2025-06-20 19:32:56.227 [INFO][4554] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb Jun 20 19:32:56.253361 containerd[1593]: 2025-06-20 19:32:56.231 [INFO][4554] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" host="localhost" Jun 20 19:32:56.253361 containerd[1593]: 2025-06-20 19:32:56.235 [INFO][4554] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" host="localhost" Jun 20 19:32:56.253361 containerd[1593]: 2025-06-20 19:32:56.235 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" host="localhost" Jun 20 19:32:56.253361 containerd[1593]: 2025-06-20 19:32:56.235 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:56.253361 containerd[1593]: 2025-06-20 19:32:56.235 [INFO][4554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" HandleID="k8s-pod-network.01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Workload="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.253476 containerd[1593]: 2025-06-20 19:32:56.238 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"750ebe81-b0b4-4c6b-8270-1ad7fa1cd241", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7sdlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e0e99f08e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:56.253547 containerd[1593]: 2025-06-20 19:32:56.238 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.253547 containerd[1593]: 2025-06-20 19:32:56.238 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e0e99f08e0 ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.253547 containerd[1593]: 2025-06-20 19:32:56.240 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.253619 containerd[1593]: 2025-06-20 19:32:56.241 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"750ebe81-b0b4-4c6b-8270-1ad7fa1cd241", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb", Pod:"coredns-668d6bf9bc-7sdlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e0e99f08e0", MAC:"ee:cf:aa:3d:7b:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:56.253619 containerd[1593]: 2025-06-20 19:32:56.249 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7sdlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7sdlx-eth0" Jun 20 19:32:56.315752 kubelet[2709]: I0620 19:32:56.315671 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cddd98664-6hn8r" podStartSLOduration=1.507706148 podStartE2EDuration="5.315653917s" podCreationTimestamp="2025-06-20 19:32:51 +0000 UTC" firstStartedPulling="2025-06-20 19:32:52.055964523 +0000 UTC m=+37.261820876" lastFinishedPulling="2025-06-20 19:32:55.863912292 +0000 UTC m=+41.069768645" observedRunningTime="2025-06-20 19:32:56.315401197 +0000 UTC m=+41.521257550" watchObservedRunningTime="2025-06-20 19:32:56.315653917 +0000 UTC m=+41.521510260" Jun 20 19:32:56.333817 containerd[1593]: time="2025-06-20T19:32:56.333686577Z" level=info msg="connecting to shim 01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb" address="unix:///run/containerd/s/70c612bb09b15f8c3daa97556c09598b7347c811df6979a039ddbddcff13f4f2" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:56.360892 systemd[1]: Started cri-containerd-01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb.scope - libcontainer container 01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb. 
Jun 20 19:32:56.373464 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:56.402483 containerd[1593]: time="2025-06-20T19:32:56.402347946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7sdlx,Uid:750ebe81-b0b4-4c6b-8270-1ad7fa1cd241,Namespace:kube-system,Attempt:0,} returns sandbox id \"01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb\"" Jun 20 19:32:56.405328 containerd[1593]: time="2025-06-20T19:32:56.405288583Z" level=info msg="CreateContainer within sandbox \"01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:32:56.416702 containerd[1593]: time="2025-06-20T19:32:56.416663355Z" level=info msg="Container cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:56.422442 containerd[1593]: time="2025-06-20T19:32:56.422399304Z" level=info msg="CreateContainer within sandbox \"01aef3d8cc90a4ae509c5f72b569d7bae15a9bb7f00f505f43599a009e3173bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7\"" Jun 20 19:32:56.423058 containerd[1593]: time="2025-06-20T19:32:56.423025972Z" level=info msg="StartContainer for \"cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7\"" Jun 20 19:32:56.423875 containerd[1593]: time="2025-06-20T19:32:56.423852099Z" level=info msg="connecting to shim cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7" address="unix:///run/containerd/s/70c612bb09b15f8c3daa97556c09598b7347c811df6979a039ddbddcff13f4f2" protocol=ttrpc version=3 Jun 20 19:32:56.446956 systemd[1]: Started cri-containerd-cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7.scope - libcontainer container cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7. 
Jun 20 19:32:56.480655 containerd[1593]: time="2025-06-20T19:32:56.480618569Z" level=info msg="StartContainer for \"cccafbaa4e30a51dc93e7c604b93a54fb43dec191844b0756dbb3502aacd99a7\" returns successfully" Jun 20 19:32:56.539928 systemd-networkd[1488]: calibd0559bb3fa: Gained IPv6LL Jun 20 19:32:57.116032 systemd-networkd[1488]: cali9f82c36fc27: Gained IPv6LL Jun 20 19:32:57.166762 containerd[1593]: time="2025-06-20T19:32:57.166716533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nczfq,Uid:588ef3e4-eda0-4b6d-8388-71f60eb8e89d,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:57.167280 containerd[1593]: time="2025-06-20T19:32:57.167247520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82mq6,Uid:10fc28c5-4d33-4680-87d1-31c5370f21a5,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:57.622875 systemd-networkd[1488]: cali6a41999fe8b: Link UP Jun 20 19:32:57.623718 systemd-networkd[1488]: cali6a41999fe8b: Gained carrier Jun 20 19:32:57.635651 kubelet[2709]: I0620 19:32:57.635597 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7sdlx" podStartSLOduration=38.635539624 podStartE2EDuration="38.635539624s" podCreationTimestamp="2025-06-20 19:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:57.350794828 +0000 UTC m=+42.556651181" watchObservedRunningTime="2025-06-20 19:32:57.635539624 +0000 UTC m=+42.841395977" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.537 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--82mq6-eth0 csi-node-driver- calico-system 10fc28c5-4d33-4680-87d1-31c5370f21a5 687 0 2025-06-20 19:32:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-82mq6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6a41999fe8b [] [] }} ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.537 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.574 [INFO][4693] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" HandleID="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Workload="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.575 [INFO][4693] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" HandleID="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Workload="localhost-k8s-csi--node--driver--82mq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-82mq6", "timestamp":"2025-06-20 19:32:57.574870211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.575 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.575 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.575 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.581 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.587 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.592 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.594 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.597 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.597 [INFO][4693] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.600 [INFO][4693] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35 Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.608 [INFO][4693] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.616 [INFO][4693] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.616 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" host="localhost" Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.616 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:57.642360 containerd[1593]: 2025-06-20 19:32:57.616 [INFO][4693] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" HandleID="k8s-pod-network.ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Workload="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.643215 containerd[1593]: 2025-06-20 19:32:57.619 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--82mq6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10fc28c5-4d33-4680-87d1-31c5370f21a5", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-82mq6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a41999fe8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:57.643215 containerd[1593]: 2025-06-20 19:32:57.619 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.643215 containerd[1593]: 2025-06-20 19:32:57.619 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a41999fe8b ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.643215 containerd[1593]: 2025-06-20 19:32:57.623 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.643215 containerd[1593]: 2025-06-20 19:32:57.624 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--82mq6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10fc28c5-4d33-4680-87d1-31c5370f21a5", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35", Pod:"csi-node-driver-82mq6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a41999fe8b", MAC:"7e:c1:6c:ed:be:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 
19:32:57.643215 containerd[1593]: 2025-06-20 19:32:57.637 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" Namespace="calico-system" Pod="csi-node-driver-82mq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--82mq6-eth0" Jun 20 19:32:57.691807 containerd[1593]: time="2025-06-20T19:32:57.691602844Z" level=info msg="connecting to shim ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35" address="unix:///run/containerd/s/c6d098c03e2734b0125d70a16eadde69438cc6a31322e01b4c573e343c87cd5d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:57.718936 systemd[1]: Started cri-containerd-ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35.scope - libcontainer container ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35. Jun 20 19:32:57.728009 systemd-networkd[1488]: cali6104db7e23e: Link UP Jun 20 19:32:57.730874 systemd-networkd[1488]: cali6104db7e23e: Gained carrier Jun 20 19:32:57.742212 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.545 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--nczfq-eth0 coredns-668d6bf9bc- kube-system 588ef3e4-eda0-4b6d-8388-71f60eb8e89d 788 0 2025-06-20 19:32:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-nczfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6104db7e23e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.545 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.577 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" HandleID="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Workload="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.577 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" HandleID="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Workload="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d8fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-nczfq", "timestamp":"2025-06-20 19:32:57.577315868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.577 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.616 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.616 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.681 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.688 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.699 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.701 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.704 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.705 [INFO][4699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.707 [INFO][4699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68 Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.711 [INFO][4699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.717 [INFO][4699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.717 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" host="localhost" Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.717 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:57.753254 containerd[1593]: 2025-06-20 19:32:57.717 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" HandleID="k8s-pod-network.8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Workload="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.753817 containerd[1593]: 2025-06-20 19:32:57.723 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nczfq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"588ef3e4-eda0-4b6d-8388-71f60eb8e89d", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-nczfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6104db7e23e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:57.753817 containerd[1593]: 2025-06-20 19:32:57.723 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.753817 containerd[1593]: 2025-06-20 19:32:57.723 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6104db7e23e ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.753817 containerd[1593]: 2025-06-20 19:32:57.729 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.753817 containerd[1593]: 2025-06-20 19:32:57.729 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nczfq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"588ef3e4-eda0-4b6d-8388-71f60eb8e89d", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68", Pod:"coredns-668d6bf9bc-nczfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6104db7e23e", MAC:"a6:a2:8f:c7:c8:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:57.753817 containerd[1593]: 2025-06-20 19:32:57.749 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" Namespace="kube-system" Pod="coredns-668d6bf9bc-nczfq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nczfq-eth0" Jun 20 19:32:57.757264 systemd-networkd[1488]: cali2e0e99f08e0: Gained IPv6LL Jun 20 19:32:57.785005 containerd[1593]: time="2025-06-20T19:32:57.784934848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82mq6,Uid:10fc28c5-4d33-4680-87d1-31c5370f21a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35\"" Jun 20 19:32:57.806942 containerd[1593]: time="2025-06-20T19:32:57.806894826Z" level=info msg="connecting to shim 8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68" address="unix:///run/containerd/s/c7eaea17522ebcb113293d760e5f47a3d1b6ac97b3542ea9ff5033279a7b1f7c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:57.838896 systemd[1]: Started cri-containerd-8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68.scope - libcontainer container 8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68. 
Jun 20 19:32:57.855586 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:32:58.014296 containerd[1593]: time="2025-06-20T19:32:58.014198772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nczfq,Uid:588ef3e4-eda0-4b6d-8388-71f60eb8e89d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68\"" Jun 20 19:32:58.019878 containerd[1593]: time="2025-06-20T19:32:58.019829897Z" level=info msg="CreateContainer within sandbox \"8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:32:58.034424 containerd[1593]: time="2025-06-20T19:32:58.034377163Z" level=info msg="Container b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:58.042807 containerd[1593]: time="2025-06-20T19:32:58.042710961Z" level=info msg="CreateContainer within sandbox \"8b75523de6e0fcb968a1c37ea293a41052fdb0c0b4c0213e9fe7141a51f42c68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195\"" Jun 20 19:32:58.044535 containerd[1593]: time="2025-06-20T19:32:58.044466258Z" level=info msg="StartContainer for \"b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195\"" Jun 20 19:32:58.046880 containerd[1593]: time="2025-06-20T19:32:58.046850158Z" level=info msg="connecting to shim b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195" address="unix:///run/containerd/s/c7eaea17522ebcb113293d760e5f47a3d1b6ac97b3542ea9ff5033279a7b1f7c" protocol=ttrpc version=3 Jun 20 19:32:58.082181 systemd[1]: Started cri-containerd-b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195.scope - libcontainer container b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195. 
Jun 20 19:32:58.117847 containerd[1593]: time="2025-06-20T19:32:58.117803087Z" level=info msg="StartContainer for \"b2bc1532b1b34fbffced534b65edab52442206df2eac5021b60b614751c7b195\" returns successfully" Jun 20 19:32:58.531533 kubelet[2709]: I0620 19:32:58.531433 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nczfq" podStartSLOduration=39.531416405 podStartE2EDuration="39.531416405s" podCreationTimestamp="2025-06-20 19:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:58.531131485 +0000 UTC m=+43.736987838" watchObservedRunningTime="2025-06-20 19:32:58.531416405 +0000 UTC m=+43.737272758" Jun 20 19:32:58.607977 containerd[1593]: time="2025-06-20T19:32:58.607920499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:58.609159 containerd[1593]: time="2025-06-20T19:32:58.609115444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:32:58.613225 containerd[1593]: time="2025-06-20T19:32:58.612967265Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:58.615853 containerd[1593]: time="2025-06-20T19:32:58.615804674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:58.616445 containerd[1593]: time="2025-06-20T19:32:58.616359185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", 
repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 2.752020415s" Jun 20 19:32:58.616445 containerd[1593]: time="2025-06-20T19:32:58.616399492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:32:58.618107 containerd[1593]: time="2025-06-20T19:32:58.618055201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:32:58.624361 containerd[1593]: time="2025-06-20T19:32:58.624316760Z" level=info msg="CreateContainer within sandbox \"0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:32:58.634895 containerd[1593]: time="2025-06-20T19:32:58.634851841Z" level=info msg="Container 7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:58.651137 containerd[1593]: time="2025-06-20T19:32:58.651011864Z" level=info msg="CreateContainer within sandbox \"0b3fbde16aac6f975c4a680705492b7197ee706a4faf1b80193c705a923e5a1b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9\"" Jun 20 19:32:58.651737 containerd[1593]: time="2025-06-20T19:32:58.651704537Z" level=info msg="StartContainer for \"7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9\"" Jun 20 19:32:58.653141 containerd[1593]: time="2025-06-20T19:32:58.653094522Z" level=info msg="connecting to shim 7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9" address="unix:///run/containerd/s/5681154d311b3fd2c11eaa9848548f9f8a3153d5e391b5aede8b5dc1f34432d6" protocol=ttrpc version=3 Jun 20 19:32:58.681922 systemd[1]: 
Started cri-containerd-7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9.scope - libcontainer container 7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9. Jun 20 19:32:58.731390 containerd[1593]: time="2025-06-20T19:32:58.730608920Z" level=info msg="StartContainer for \"7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9\" returns successfully" Jun 20 19:32:58.909064 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:59674.service - OpenSSH per-connection server daemon (10.0.0.1:59674). Jun 20 19:32:58.970505 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 59674 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:32:58.972321 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:32:58.976898 systemd-logind[1580]: New session 9 of user core. Jun 20 19:32:58.984970 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:32:59.101294 systemd-networkd[1488]: cali6a41999fe8b: Gained IPv6LL Jun 20 19:32:59.121121 sshd[4916]: Connection closed by 10.0.0.1 port 59674 Jun 20 19:32:59.121440 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Jun 20 19:32:59.125877 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:59674.service: Deactivated successfully. Jun 20 19:32:59.128130 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:32:59.129016 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:32:59.130180 systemd-logind[1580]: Removed session 9. 
Jun 20 19:32:59.315048 kubelet[2709]: I0620 19:32:59.314708 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cd54bd598-j9rdn" podStartSLOduration=23.120859492 podStartE2EDuration="27.314689219s" podCreationTimestamp="2025-06-20 19:32:32 +0000 UTC" firstStartedPulling="2025-06-20 19:32:54.423573277 +0000 UTC m=+39.629429630" lastFinishedPulling="2025-06-20 19:32:58.617403014 +0000 UTC m=+43.823259357" observedRunningTime="2025-06-20 19:32:59.314279112 +0000 UTC m=+44.520135465" watchObservedRunningTime="2025-06-20 19:32:59.314689219 +0000 UTC m=+44.520545572" Jun 20 19:32:59.423511 kubelet[2709]: I0620 19:32:59.423426 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:59.675966 systemd-networkd[1488]: cali6104db7e23e: Gained IPv6LL Jun 20 19:32:59.694107 containerd[1593]: time="2025-06-20T19:32:59.694032085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7\" id:\"3611d66e01e10800caef9bda3259bc10ff892546c6f17d65344366278bf3ed27\" pid:4943 exited_at:{seconds:1750447979 nanos:693627809}" Jun 20 19:32:59.778962 containerd[1593]: time="2025-06-20T19:32:59.778908995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7\" id:\"b5ae920b7171298963bb4fbd66e88c43fa89e525e70371fdd31d76655f7a28b4\" pid:4969 exited_at:{seconds:1750447979 nanos:778556036}" Jun 20 19:33:00.348762 containerd[1593]: time="2025-06-20T19:33:00.348724221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9\" id:\"629bd44084442ef0e00457f3531744139d296f77f14ba0c529236f1fadbfc383\" pid:4994 exited_at:{seconds:1750447980 nanos:347801924}" Jun 20 19:33:01.752926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024694805.mount: Deactivated 
successfully. Jun 20 19:33:02.913444 containerd[1593]: time="2025-06-20T19:33:02.913388834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:02.916967 containerd[1593]: time="2025-06-20T19:33:02.916924140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:33:02.918251 containerd[1593]: time="2025-06-20T19:33:02.918216488Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:02.920490 containerd[1593]: time="2025-06-20T19:33:02.920439489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:02.921043 containerd[1593]: time="2025-06-20T19:33:02.920995592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 4.302911226s" Jun 20 19:33:02.921043 containerd[1593]: time="2025-06-20T19:33:02.921040016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:33:02.922005 containerd[1593]: time="2025-06-20T19:33:02.921978353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:33:02.923685 containerd[1593]: time="2025-06-20T19:33:02.923508823Z" level=info msg="CreateContainer within sandbox 
\"ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:33:02.930306 containerd[1593]: time="2025-06-20T19:33:02.930268356Z" level=info msg="Container aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:02.938576 containerd[1593]: time="2025-06-20T19:33:02.938535145Z" level=info msg="CreateContainer within sandbox \"ed05c351dc23b8399560ae8215df312e4f2e1db46d665192c70e04c6ed322152\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\"" Jun 20 19:33:02.938933 containerd[1593]: time="2025-06-20T19:33:02.938906878Z" level=info msg="StartContainer for \"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\"" Jun 20 19:33:02.939953 containerd[1593]: time="2025-06-20T19:33:02.939925047Z" level=info msg="connecting to shim aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc" address="unix:///run/containerd/s/292edfb2a70cbbf18856f45dc7bcf20c5931d9f4b29260db9ad257df211aab9e" protocol=ttrpc version=3 Jun 20 19:33:02.967918 systemd[1]: Started cri-containerd-aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc.scope - libcontainer container aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc. 
Jun 20 19:33:03.016335 containerd[1593]: time="2025-06-20T19:33:03.016290708Z" level=info msg="StartContainer for \"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\" returns successfully" Jun 20 19:33:03.337446 kubelet[2709]: I0620 19:33:03.337366 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-579gn" podStartSLOduration=23.933032321 podStartE2EDuration="32.337344059s" podCreationTimestamp="2025-06-20 19:32:31 +0000 UTC" firstStartedPulling="2025-06-20 19:32:54.517421151 +0000 UTC m=+39.723277504" lastFinishedPulling="2025-06-20 19:33:02.921732889 +0000 UTC m=+48.127589242" observedRunningTime="2025-06-20 19:33:03.334718416 +0000 UTC m=+48.540574769" watchObservedRunningTime="2025-06-20 19:33:03.337344059 +0000 UTC m=+48.543200412" Jun 20 19:33:03.408356 containerd[1593]: time="2025-06-20T19:33:03.408274446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\" id:\"85814bf3eb51bfd4dc7b0c532cdca988b2574a13b727a697e622bf9abc3cc466\" pid:5071 exit_status:1 exited_at:{seconds:1750447983 nanos:407864721}" Jun 20 19:33:04.139128 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:55244.service - OpenSSH per-connection server daemon (10.0.0.1:55244). Jun 20 19:33:04.196253 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 55244 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:33:04.197621 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:33:04.201669 systemd-logind[1580]: New session 10 of user core. Jun 20 19:33:04.209889 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 20 19:33:04.332321 sshd[5090]: Connection closed by 10.0.0.1 port 55244 Jun 20 19:33:04.332621 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Jun 20 19:33:04.336988 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:55244.service: Deactivated successfully. Jun 20 19:33:04.339172 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:33:04.340150 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:33:04.341699 systemd-logind[1580]: Removed session 10. Jun 20 19:33:04.396862 containerd[1593]: time="2025-06-20T19:33:04.396737519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\" id:\"29089b52f1cdd4a39703bcd8fe1e245018b42f8c30cd47eafa5da8755766fda9\" pid:5117 exit_status:1 exited_at:{seconds:1750447984 nanos:396451728}" Jun 20 19:33:05.423483 containerd[1593]: time="2025-06-20T19:33:05.423433479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\" id:\"f166a1df3486e682e7fcfab532592d83bcc0c737492bb92fd67242d1e66f5bb5\" pid:5148 exit_status:1 exited_at:{seconds:1750447985 nanos:422479763}" Jun 20 19:33:06.340559 containerd[1593]: time="2025-06-20T19:33:06.340517971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:06.341369 containerd[1593]: time="2025-06-20T19:33:06.341341722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:33:06.342643 containerd[1593]: time="2025-06-20T19:33:06.342584704Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:06.344676 containerd[1593]: time="2025-06-20T19:33:06.344606952Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:06.345397 containerd[1593]: time="2025-06-20T19:33:06.345261811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 3.423256767s" Jun 20 19:33:06.345397 containerd[1593]: time="2025-06-20T19:33:06.345305484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:33:06.346156 containerd[1593]: time="2025-06-20T19:33:06.346133171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:33:06.352593 containerd[1593]: time="2025-06-20T19:33:06.352321074Z" level=info msg="CreateContainer within sandbox \"546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:33:06.367591 containerd[1593]: time="2025-06-20T19:33:06.366568995Z" level=info msg="Container c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:06.377855 containerd[1593]: time="2025-06-20T19:33:06.377810806Z" level=info msg="CreateContainer within sandbox \"546f1db123aa8a3588c31a9bef699729875b5bc2a4369fa7384086657d2634aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5\"" Jun 20 19:33:06.378313 containerd[1593]: time="2025-06-20T19:33:06.378286456Z" level=info msg="StartContainer for 
\"c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5\"" Jun 20 19:33:06.380171 containerd[1593]: time="2025-06-20T19:33:06.380107974Z" level=info msg="connecting to shim c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5" address="unix:///run/containerd/s/456a81f3a5957cd4bf4236946f4eb24acd3d5c2fa0d8087813083ccdefc57810" protocol=ttrpc version=3 Jun 20 19:33:06.401928 systemd[1]: Started cri-containerd-c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5.scope - libcontainer container c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5. Jun 20 19:33:06.455755 containerd[1593]: time="2025-06-20T19:33:06.455714654Z" level=info msg="StartContainer for \"c66acc7166ab92c591812e6cd919981c3dd428cda1922680537da1d7d92f5fa5\" returns successfully" Jun 20 19:33:06.963552 containerd[1593]: time="2025-06-20T19:33:06.963418094Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:06.964831 containerd[1593]: time="2025-06-20T19:33:06.964740147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:33:06.966822 containerd[1593]: time="2025-06-20T19:33:06.966754340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 620.592023ms" Jun 20 19:33:06.966822 containerd[1593]: time="2025-06-20T19:33:06.966822529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:33:06.968802 containerd[1593]: time="2025-06-20T19:33:06.968447735Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:33:06.970623 containerd[1593]: time="2025-06-20T19:33:06.970591954Z" level=info msg="CreateContainer within sandbox \"9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:33:07.224638 containerd[1593]: time="2025-06-20T19:33:07.224503422Z" level=info msg="Container 3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:07.389114 kubelet[2709]: I0620 19:33:07.388928 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-czfrv" podStartSLOduration=28.502804869 podStartE2EDuration="39.388910196s" podCreationTimestamp="2025-06-20 19:32:28 +0000 UTC" firstStartedPulling="2025-06-20 19:32:55.459903028 +0000 UTC m=+40.665759381" lastFinishedPulling="2025-06-20 19:33:06.346008355 +0000 UTC m=+51.551864708" observedRunningTime="2025-06-20 19:33:07.387414595 +0000 UTC m=+52.593270948" watchObservedRunningTime="2025-06-20 19:33:07.388910196 +0000 UTC m=+52.594766539" Jun 20 19:33:07.393047 containerd[1593]: time="2025-06-20T19:33:07.392685431Z" level=info msg="CreateContainer within sandbox \"9ce08fdc84cacf9e05aea957a01eb8783547e393722e7ac8204db7a26c22797f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618\"" Jun 20 19:33:07.393632 containerd[1593]: time="2025-06-20T19:33:07.393562481Z" level=info msg="StartContainer for \"3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618\"" Jun 20 19:33:07.395802 containerd[1593]: time="2025-06-20T19:33:07.395372507Z" level=info msg="connecting to shim 3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618" address="unix:///run/containerd/s/3ddadaaba6d8ce0a7b6f86de4d68573d65d314b94f5b245c55a51c2a4a8bc531" protocol=ttrpc 
version=3
Jun 20 19:33:07.422911 systemd[1]: Started cri-containerd-3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618.scope - libcontainer container 3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618.
Jun 20 19:33:07.475927 containerd[1593]: time="2025-06-20T19:33:07.475541965Z" level=info msg="StartContainer for \"3e6a30642470bcd0e0505a345dea0a34b669cff7fea99cd9a8c6cd128fd24618\" returns successfully"
Jun 20 19:33:08.352363 kubelet[2709]: I0620 19:33:08.352329 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:33:08.362602 kubelet[2709]: I0620 19:33:08.362534 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cc96d8cf6-s2hg5" podStartSLOduration=28.939093377 podStartE2EDuration="40.362521364s" podCreationTimestamp="2025-06-20 19:32:28 +0000 UTC" firstStartedPulling="2025-06-20 19:32:55.544642433 +0000 UTC m=+40.750498786" lastFinishedPulling="2025-06-20 19:33:06.96807042 +0000 UTC m=+52.173926773" observedRunningTime="2025-06-20 19:33:08.362446793 +0000 UTC m=+53.568303146" watchObservedRunningTime="2025-06-20 19:33:08.362521364 +0000 UTC m=+53.568377717"
Jun 20 19:33:08.663163 containerd[1593]: time="2025-06-20T19:33:08.663042493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:33:08.663795 containerd[1593]: time="2025-06-20T19:33:08.663720967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389"
Jun 20 19:33:08.664850 containerd[1593]: time="2025-06-20T19:33:08.664818184Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:33:08.673902 containerd[1593]: time="2025-06-20T19:33:08.673859414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:33:08.674425 containerd[1593]: time="2025-06-20T19:33:08.674380490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.705890876s"
Jun 20 19:33:08.674425 containerd[1593]: time="2025-06-20T19:33:08.674422189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\""
Jun 20 19:33:08.676572 containerd[1593]: time="2025-06-20T19:33:08.676542261Z" level=info msg="CreateContainer within sandbox \"ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jun 20 19:33:08.688964 containerd[1593]: time="2025-06-20T19:33:08.688935706Z" level=info msg="Container 2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:33:08.708125 containerd[1593]: time="2025-06-20T19:33:08.708084143Z" level=info msg="CreateContainer within sandbox \"ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f\""
Jun 20 19:33:08.708623 containerd[1593]: time="2025-06-20T19:33:08.708599539Z" level=info msg="StartContainer for \"2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f\""
Jun 20 19:33:08.709880 containerd[1593]: time="2025-06-20T19:33:08.709803908Z" level=info msg="connecting to shim 2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f" address="unix:///run/containerd/s/c6d098c03e2734b0125d70a16eadde69438cc6a31322e01b4c573e343c87cd5d" protocol=ttrpc version=3
Jun 20 19:33:08.729892 systemd[1]: Started cri-containerd-2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f.scope - libcontainer container 2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f.
Jun 20 19:33:08.773109 containerd[1593]: time="2025-06-20T19:33:08.773073174Z" level=info msg="StartContainer for \"2923094f53714cee34c2c151f4dab7f4e4580dd944d3181138c21c1f9781279f\" returns successfully"
Jun 20 19:33:08.774717 containerd[1593]: time="2025-06-20T19:33:08.774682420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\""
Jun 20 19:33:09.346411 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256).
Jun 20 19:33:09.356513 kubelet[2709]: I0620 19:33:09.356486 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:33:09.405868 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:09.407285 sshd-session[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:09.411608 systemd-logind[1580]: New session 11 of user core.
Jun 20 19:33:09.420917 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:33:09.541639 sshd[5277]: Connection closed by 10.0.0.1 port 55256
Jun 20 19:33:09.542009 sshd-session[5275]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:09.550622 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:55256.service: Deactivated successfully.
Jun 20 19:33:09.552672 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:33:09.553592 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:33:09.556503 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:55272.service - OpenSSH per-connection server daemon (10.0.0.1:55272).
Jun 20 19:33:09.557443 systemd-logind[1580]: Removed session 11.
Jun 20 19:33:09.618344 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 55272 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:09.619566 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:09.623858 systemd-logind[1580]: New session 12 of user core.
Jun 20 19:33:09.630894 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:33:09.776946 sshd[5294]: Connection closed by 10.0.0.1 port 55272
Jun 20 19:33:09.777402 sshd-session[5292]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:09.789218 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:55272.service: Deactivated successfully.
Jun 20 19:33:09.794578 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:33:09.796598 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:33:09.799482 systemd-logind[1580]: Removed session 12.
Jun 20 19:33:09.801360 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:55284.service - OpenSSH per-connection server daemon (10.0.0.1:55284).
Jun 20 19:33:09.849236 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 55284 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:09.850919 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:09.855380 systemd-logind[1580]: New session 13 of user core.
Jun 20 19:33:09.862903 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:33:09.975218 sshd[5307]: Connection closed by 10.0.0.1 port 55284
Jun 20 19:33:09.975530 sshd-session[5305]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:09.979923 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:55284.service: Deactivated successfully.
Jun 20 19:33:09.981792 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:33:09.982565 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:33:09.983664 systemd-logind[1580]: Removed session 13.
Jun 20 19:33:11.035332 containerd[1593]: time="2025-06-20T19:33:11.035284517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:33:11.036684 containerd[1593]: time="2025-06-20T19:33:11.036628589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633"
Jun 20 19:33:11.038026 containerd[1593]: time="2025-06-20T19:33:11.037993422Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:33:11.040277 containerd[1593]: time="2025-06-20T19:33:11.040241905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:33:11.040963 containerd[1593]: time="2025-06-20T19:33:11.040923916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 2.266203365s"
Jun 20 19:33:11.041015 containerd[1593]: time="2025-06-20T19:33:11.040963591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\""
Jun 20 19:33:11.042862 containerd[1593]: time="2025-06-20T19:33:11.042828138Z" level=info msg="CreateContainer within sandbox \"ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 20 19:33:11.052417 containerd[1593]: time="2025-06-20T19:33:11.052373957Z" level=info msg="Container 6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:33:11.061427 containerd[1593]: time="2025-06-20T19:33:11.061393020Z" level=info msg="CreateContainer within sandbox \"ebeb4b46d680009e43f8346b2ceae13f7fed405f20ec599fb16edbb342973f35\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173\""
Jun 20 19:33:11.061740 containerd[1593]: time="2025-06-20T19:33:11.061714268Z" level=info msg="StartContainer for \"6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173\""
Jun 20 19:33:11.063096 containerd[1593]: time="2025-06-20T19:33:11.063072638Z" level=info msg="connecting to shim 6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173" address="unix:///run/containerd/s/c6d098c03e2734b0125d70a16eadde69438cc6a31322e01b4c573e343c87cd5d" protocol=ttrpc version=3
Jun 20 19:33:11.092901 systemd[1]: Started cri-containerd-6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173.scope - libcontainer container 6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173.
Jun 20 19:33:11.132613 containerd[1593]: time="2025-06-20T19:33:11.132559117Z" level=info msg="StartContainer for \"6fe646f12ad07a8893b1507bc00a300e0c9fee1a2ec6ffe4f07a0740a1be0173\" returns successfully"
Jun 20 19:33:11.227854 kubelet[2709]: I0620 19:33:11.227827 2709 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 20 19:33:11.227854 kubelet[2709]: I0620 19:33:11.227857 2709 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 20 19:33:11.376115 kubelet[2709]: I0620 19:33:11.376056 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-82mq6" podStartSLOduration=26.122360524 podStartE2EDuration="39.376038316s" podCreationTimestamp="2025-06-20 19:32:32 +0000 UTC" firstStartedPulling="2025-06-20 19:32:57.787885423 +0000 UTC m=+42.993741776" lastFinishedPulling="2025-06-20 19:33:11.041563225 +0000 UTC m=+56.247419568" observedRunningTime="2025-06-20 19:33:11.375726807 +0000 UTC m=+56.581583150" watchObservedRunningTime="2025-06-20 19:33:11.376038316 +0000 UTC m=+56.581894669"
Jun 20 19:33:14.994894 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:48904.service - OpenSSH per-connection server daemon (10.0.0.1:48904).
Jun 20 19:33:15.054035 sshd[5370]: Accepted publickey for core from 10.0.0.1 port 48904 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:15.055426 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:15.059717 systemd-logind[1580]: New session 14 of user core.
Jun 20 19:33:15.066899 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:33:15.190810 sshd[5372]: Connection closed by 10.0.0.1 port 48904
Jun 20 19:33:15.191094 sshd-session[5370]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:15.197101 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:48904.service: Deactivated successfully.
Jun 20 19:33:15.199363 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:33:15.200199 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:33:15.201991 systemd-logind[1580]: Removed session 14.
Jun 20 19:33:16.652310 containerd[1593]: time="2025-06-20T19:33:16.652186569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\" id:\"65363d2372e816129413e878315d5d027294516cc781c7e8b1b734eac766e211\" pid:5401 exited_at:{seconds:1750447996 nanos:651742239}"
Jun 20 19:33:17.276270 kubelet[2709]: I0620 19:33:17.276243 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:33:19.469932 kubelet[2709]: I0620 19:33:19.469874 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:33:20.203740 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:48920.service - OpenSSH per-connection server daemon (10.0.0.1:48920).
Jun 20 19:33:20.272020 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 48920 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:20.273415 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:20.277681 systemd-logind[1580]: New session 15 of user core.
Jun 20 19:33:20.288894 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:33:20.413603 sshd[5424]: Connection closed by 10.0.0.1 port 48920
Jun 20 19:33:20.413911 sshd-session[5422]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:20.419800 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:48920.service: Deactivated successfully.
Jun 20 19:33:20.421898 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:33:20.422679 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:33:20.423901 systemd-logind[1580]: Removed session 15.
Jun 20 19:33:25.426256 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:41840.service - OpenSSH per-connection server daemon (10.0.0.1:41840).
Jun 20 19:33:25.487016 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 41840 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:25.488348 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:25.492467 systemd-logind[1580]: New session 16 of user core.
Jun 20 19:33:25.501906 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:33:25.618563 sshd[5443]: Connection closed by 10.0.0.1 port 41840
Jun 20 19:33:25.618880 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:25.624307 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:41840.service: Deactivated successfully.
Jun 20 19:33:25.626329 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:33:25.627100 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:33:25.628339 systemd-logind[1580]: Removed session 16.
Jun 20 19:33:29.785584 containerd[1593]: time="2025-06-20T19:33:29.785503487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2955553363bebd490b2b5ae96c8b36195fc72478ca6d879e92a2b1aa6a73fc7\" id:\"ad0e4a20018e484b79371c19b9a87b0fbc42c31771e79456ee692a8f2b70ce06\" pid:5469 exited_at:{seconds:1750448009 nanos:785042261}"
Jun 20 19:33:30.151592 kubelet[2709]: E0620 19:33:30.151554 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:33:30.351004 containerd[1593]: time="2025-06-20T19:33:30.350955486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9\" id:\"d23e65a6665be6f15945067ee56de99c23c91bccdff7621d7443b83729e76fe1\" pid:5494 exited_at:{seconds:1750448010 nanos:350636602}"
Jun 20 19:33:30.634264 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:41854.service - OpenSSH per-connection server daemon (10.0.0.1:41854).
Jun 20 19:33:30.696689 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 41854 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:30.698209 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:30.702595 systemd-logind[1580]: New session 17 of user core.
Jun 20 19:33:30.708896 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:33:30.842466 sshd[5507]: Connection closed by 10.0.0.1 port 41854
Jun 20 19:33:30.842793 sshd-session[5505]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:30.851979 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:41854.service: Deactivated successfully.
Jun 20 19:33:30.854068 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:33:30.854823 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:33:30.857706 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:41868.service - OpenSSH per-connection server daemon (10.0.0.1:41868).
Jun 20 19:33:30.858338 systemd-logind[1580]: Removed session 17.
Jun 20 19:33:30.915473 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 41868 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:30.916881 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:30.921739 systemd-logind[1580]: New session 18 of user core.
Jun 20 19:33:30.930894 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:33:31.210598 sshd[5522]: Connection closed by 10.0.0.1 port 41868
Jun 20 19:33:31.211161 sshd-session[5520]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:31.221651 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:41868.service: Deactivated successfully.
Jun 20 19:33:31.223513 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:33:31.224246 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:33:31.227059 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:41880.service - OpenSSH per-connection server daemon (10.0.0.1:41880).
Jun 20 19:33:31.227727 systemd-logind[1580]: Removed session 18.
Jun 20 19:33:31.290730 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 41880 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:31.292308 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:31.296683 systemd-logind[1580]: New session 19 of user core.
Jun 20 19:33:31.305909 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:33:32.052551 sshd[5537]: Connection closed by 10.0.0.1 port 41880
Jun 20 19:33:32.053114 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:32.064215 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:41880.service: Deactivated successfully.
Jun 20 19:33:32.067625 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:33:32.068946 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:33:32.071719 systemd-logind[1580]: Removed session 19.
Jun 20 19:33:32.073437 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:41888.service - OpenSSH per-connection server daemon (10.0.0.1:41888).
Jun 20 19:33:32.124881 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 41888 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:32.126222 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:32.130704 systemd-logind[1580]: New session 20 of user core.
Jun 20 19:33:32.138896 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:33:32.152103 kubelet[2709]: E0620 19:33:32.152064 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:33:32.394152 sshd[5558]: Connection closed by 10.0.0.1 port 41888
Jun 20 19:33:32.394831 sshd-session[5555]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:32.403849 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:41888.service: Deactivated successfully.
Jun 20 19:33:32.405767 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:33:32.406978 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:33:32.410001 systemd[1]: Started sshd@20-10.0.0.149:22-10.0.0.1:41902.service - OpenSSH per-connection server daemon (10.0.0.1:41902).
Jun 20 19:33:32.411198 systemd-logind[1580]: Removed session 20.
Jun 20 19:33:32.457797 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 41902 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:32.460045 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:32.465229 systemd-logind[1580]: New session 21 of user core.
Jun 20 19:33:32.470973 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:33:32.615716 sshd[5571]: Connection closed by 10.0.0.1 port 41902
Jun 20 19:33:32.616071 sshd-session[5569]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:32.620418 systemd[1]: sshd@20-10.0.0.149:22-10.0.0.1:41902.service: Deactivated successfully.
Jun 20 19:33:32.622428 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:33:32.623305 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:33:32.624575 systemd-logind[1580]: Removed session 21.
Jun 20 19:33:35.407284 containerd[1593]: time="2025-06-20T19:33:35.407229522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa564ed3f482fb5b2b1012a2f93e3483517e57a2c87a31ac6cb7214692ab4fdc\" id:\"bbdce5796b024b066c21ce6ffff03804690ff9d04ab7398e60891cbd61160694\" pid:5600 exited_at:{seconds:1750448015 nanos:406953422}"
Jun 20 19:33:37.632364 systemd[1]: Started sshd@21-10.0.0.149:22-10.0.0.1:39614.service - OpenSSH per-connection server daemon (10.0.0.1:39614).
Jun 20 19:33:37.692905 sshd[5618]: Accepted publickey for core from 10.0.0.1 port 39614 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:37.694406 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:37.698696 systemd-logind[1580]: New session 22 of user core.
Jun 20 19:33:37.705909 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:33:37.889844 sshd[5620]: Connection closed by 10.0.0.1 port 39614
Jun 20 19:33:37.890091 sshd-session[5618]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:37.894420 systemd[1]: sshd@21-10.0.0.149:22-10.0.0.1:39614.service: Deactivated successfully.
Jun 20 19:33:37.896450 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:33:37.897244 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:33:37.898415 systemd-logind[1580]: Removed session 22.
Jun 20 19:33:38.152317 kubelet[2709]: E0620 19:33:38.152211 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:33:42.902889 systemd[1]: Started sshd@22-10.0.0.149:22-10.0.0.1:39626.service - OpenSSH per-connection server daemon (10.0.0.1:39626).
Jun 20 19:33:42.955871 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 39626 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:42.957316 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:42.961549 systemd-logind[1580]: New session 23 of user core.
Jun 20 19:33:42.972902 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:33:43.090210 sshd[5638]: Connection closed by 10.0.0.1 port 39626
Jun 20 19:33:43.090508 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:43.094294 systemd[1]: sshd@22-10.0.0.149:22-10.0.0.1:39626.service: Deactivated successfully.
Jun 20 19:33:43.096397 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:33:43.097317 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:33:43.098491 systemd-logind[1580]: Removed session 23.
Jun 20 19:33:48.102665 systemd[1]: Started sshd@23-10.0.0.149:22-10.0.0.1:53734.service - OpenSSH per-connection server daemon (10.0.0.1:53734).
Jun 20 19:33:48.159608 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 53734 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:48.160989 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:48.165211 systemd-logind[1580]: New session 24 of user core.
Jun 20 19:33:48.171891 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:33:48.281728 containerd[1593]: time="2025-06-20T19:33:48.281273770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7676052fe69075c0d157eefc55e3d286b8001118bdab362fe9d070a8150b0af9\" id:\"77fc6f2e199f7b978ed0f240f951a7f42218457f587bc0dc0f332545ba2a9237\" pid:5674 exited_at:{seconds:1750448028 nanos:280599182}"
Jun 20 19:33:48.287014 sshd[5653]: Connection closed by 10.0.0.1 port 53734
Jun 20 19:33:48.287328 sshd-session[5651]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:48.291283 systemd[1]: sshd@23-10.0.0.149:22-10.0.0.1:53734.service: Deactivated successfully.
Jun 20 19:33:48.293336 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:33:48.294195 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:33:48.295272 systemd-logind[1580]: Removed session 24.
Jun 20 19:33:53.303862 systemd[1]: Started sshd@24-10.0.0.149:22-10.0.0.1:53746.service - OpenSSH per-connection server daemon (10.0.0.1:53746).
Jun 20 19:33:53.379430 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 53746 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:33:53.380044 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:33:53.385668 systemd-logind[1580]: New session 25 of user core.
Jun 20 19:33:53.393598 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:33:53.583225 sshd[5692]: Connection closed by 10.0.0.1 port 53746
Jun 20 19:33:53.583665 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
Jun 20 19:33:53.588416 systemd[1]: sshd@24-10.0.0.149:22-10.0.0.1:53746.service: Deactivated successfully.
Jun 20 19:33:53.590529 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:33:53.591382 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:33:53.592931 systemd-logind[1580]: Removed session 25.