Jul 1 08:37:37.876248 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jun 30 19:26:54 -00 2025
Jul 1 08:37:37.876271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:37:37.876280 kernel: BIOS-provided physical RAM map:
Jul 1 08:37:37.876287 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 1 08:37:37.876293 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 1 08:37:37.876299 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 1 08:37:37.876307 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 1 08:37:37.876316 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 1 08:37:37.876325 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 1 08:37:37.876332 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 1 08:37:37.876338 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 1 08:37:37.876345 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 1 08:37:37.876351 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 1 08:37:37.876358 kernel: NX (Execute Disable) protection: active
Jul 1 08:37:37.876368 kernel: APIC: Static calls initialized
Jul 1 08:37:37.876375 kernel: SMBIOS 2.8 present.
Jul 1 08:37:37.876385 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 1 08:37:37.876392 kernel: DMI: Memory slots populated: 1/1
Jul 1 08:37:37.876399 kernel: Hypervisor detected: KVM
Jul 1 08:37:37.876406 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 1 08:37:37.876427 kernel: kvm-clock: using sched offset of 4745066089 cycles
Jul 1 08:37:37.876435 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 1 08:37:37.876442 kernel: tsc: Detected 2794.750 MHz processor
Jul 1 08:37:37.876453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 1 08:37:37.876460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 1 08:37:37.876468 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 1 08:37:37.876475 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 1 08:37:37.876483 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 1 08:37:37.876490 kernel: Using GB pages for direct mapping
Jul 1 08:37:37.876497 kernel: ACPI: Early table checksum verification disabled
Jul 1 08:37:37.876504 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 1 08:37:37.876512 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876521 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876529 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876536 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 1 08:37:37.876543 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876550 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876558 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876565 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:37:37.876572 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 1 08:37:37.876585 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 1 08:37:37.876592 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 1 08:37:37.876600 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 1 08:37:37.876607 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 1 08:37:37.876615 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 1 08:37:37.876622 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 1 08:37:37.876632 kernel: No NUMA configuration found
Jul 1 08:37:37.876639 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 1 08:37:37.876646 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 1 08:37:37.876654 kernel: Zone ranges:
Jul 1 08:37:37.876661 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 1 08:37:37.876669 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 1 08:37:37.876676 kernel: Normal empty
Jul 1 08:37:37.876683 kernel: Device empty
Jul 1 08:37:37.876691 kernel: Movable zone start for each node
Jul 1 08:37:37.876700 kernel: Early memory node ranges
Jul 1 08:37:37.876708 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 1 08:37:37.876715 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 1 08:37:37.876723 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 1 08:37:37.876730 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 1 08:37:37.876737 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 1 08:37:37.876745 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 1 08:37:37.876752 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 1 08:37:37.876763 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 1 08:37:37.876770 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 1 08:37:37.876780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 1 08:37:37.876787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 1 08:37:37.876797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 1 08:37:37.876804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 1 08:37:37.876812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 1 08:37:37.876819 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 1 08:37:37.876827 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 1 08:37:37.876834 kernel: TSC deadline timer available
Jul 1 08:37:37.876841 kernel: CPU topo: Max. logical packages: 1
Jul 1 08:37:37.876851 kernel: CPU topo: Max. logical dies: 1
Jul 1 08:37:37.876859 kernel: CPU topo: Max. dies per package: 1
Jul 1 08:37:37.876866 kernel: CPU topo: Max. threads per core: 1
Jul 1 08:37:37.876873 kernel: CPU topo: Num. cores per package: 4
Jul 1 08:37:37.876881 kernel: CPU topo: Num. threads per package: 4
Jul 1 08:37:37.876888 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 1 08:37:37.876896 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 1 08:37:37.876903 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 1 08:37:37.876910 kernel: kvm-guest: setup PV sched yield
Jul 1 08:37:37.876920 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 1 08:37:37.876928 kernel: Booting paravirtualized kernel on KVM
Jul 1 08:37:37.876936 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 1 08:37:37.876943 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 1 08:37:37.876951 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 1 08:37:37.876958 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 1 08:37:37.876965 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 1 08:37:37.876973 kernel: kvm-guest: PV spinlocks enabled
Jul 1 08:37:37.876980 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 1 08:37:37.876991 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:37:37.876999 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 1 08:37:37.877006 kernel: random: crng init done
Jul 1 08:37:37.877014 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 1 08:37:37.877021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 1 08:37:37.877029 kernel: Fallback order for Node 0: 0
Jul 1 08:37:37.877036 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 1 08:37:37.877043 kernel: Policy zone: DMA32
Jul 1 08:37:37.877051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 1 08:37:37.877060 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 1 08:37:37.877068 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 1 08:37:37.877075 kernel: ftrace: allocated 157 pages with 5 groups
Jul 1 08:37:37.877083 kernel: Dynamic Preempt: voluntary
Jul 1 08:37:37.877090 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 1 08:37:37.877098 kernel: rcu: RCU event tracing is enabled.
Jul 1 08:37:37.877106 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 1 08:37:37.877114 kernel: Trampoline variant of Tasks RCU enabled.
Jul 1 08:37:37.877124 kernel: Rude variant of Tasks RCU enabled.
Jul 1 08:37:37.877137 kernel: Tracing variant of Tasks RCU enabled.
Jul 1 08:37:37.877144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 1 08:37:37.877158 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 1 08:37:37.877174 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 1 08:37:37.877187 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 1 08:37:37.877206 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 1 08:37:37.877214 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 1 08:37:37.877222 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 1 08:37:37.877239 kernel: Console: colour VGA+ 80x25
Jul 1 08:37:37.877246 kernel: printk: legacy console [ttyS0] enabled
Jul 1 08:37:37.877254 kernel: ACPI: Core revision 20240827
Jul 1 08:37:37.877264 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 1 08:37:37.877271 kernel: APIC: Switch to symmetric I/O mode setup
Jul 1 08:37:37.877279 kernel: x2apic enabled
Jul 1 08:37:37.877287 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 1 08:37:37.877297 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 1 08:37:37.877305 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 1 08:37:37.877315 kernel: kvm-guest: setup PV IPIs
Jul 1 08:37:37.877323 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 1 08:37:37.877331 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 1 08:37:37.877339 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 1 08:37:37.877347 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 1 08:37:37.877354 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 1 08:37:37.877362 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 1 08:37:37.877370 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 1 08:37:37.877380 kernel: Spectre V2 : Mitigation: Retpolines
Jul 1 08:37:37.877388 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 1 08:37:37.877395 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 1 08:37:37.877403 kernel: RETBleed: Mitigation: untrained return thunk
Jul 1 08:37:37.877422 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 1 08:37:37.877430 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 1 08:37:37.877437 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 1 08:37:37.877446 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 1 08:37:37.877456 kernel: x86/bugs: return thunk changed
Jul 1 08:37:37.877464 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 1 08:37:37.877471 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 1 08:37:37.877479 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 1 08:37:37.877497 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 1 08:37:37.877514 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 1 08:37:37.877523 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 1 08:37:37.877531 kernel: Freeing SMP alternatives memory: 32K
Jul 1 08:37:37.877538 kernel: pid_max: default: 32768 minimum: 301
Jul 1 08:37:37.877549 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 1 08:37:37.877557 kernel: landlock: Up and running.
Jul 1 08:37:37.877566 kernel: SELinux: Initializing.
Jul 1 08:37:37.877576 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 1 08:37:37.877588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 1 08:37:37.877598 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 1 08:37:37.877608 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 1 08:37:37.877618 kernel: ... version: 0
Jul 1 08:37:37.877628 kernel: ... bit width: 48
Jul 1 08:37:37.877640 kernel: ... generic registers: 6
Jul 1 08:37:37.877649 kernel: ... value mask: 0000ffffffffffff
Jul 1 08:37:37.877659 kernel: ... max period: 00007fffffffffff
Jul 1 08:37:37.877668 kernel: ... fixed-purpose events: 0
Jul 1 08:37:37.877678 kernel: ... event mask: 000000000000003f
Jul 1 08:37:37.877688 kernel: signal: max sigframe size: 1776
Jul 1 08:37:37.877697 kernel: rcu: Hierarchical SRCU implementation.
Jul 1 08:37:37.877707 kernel: rcu: Max phase no-delay instances is 400.
Jul 1 08:37:37.877715 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 1 08:37:37.877723 kernel: smp: Bringing up secondary CPUs ...
Jul 1 08:37:37.877732 kernel: smpboot: x86: Booting SMP configuration:
Jul 1 08:37:37.877740 kernel: .... node #0, CPUs: #1 #2 #3
Jul 1 08:37:37.877747 kernel: smp: Brought up 1 node, 4 CPUs
Jul 1 08:37:37.877755 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 1 08:37:37.877763 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54508K init, 2460K bss, 136904K reserved, 0K cma-reserved)
Jul 1 08:37:37.877771 kernel: devtmpfs: initialized
Jul 1 08:37:37.877779 kernel: x86/mm: Memory block size: 128MB
Jul 1 08:37:37.877786 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 1 08:37:37.877794 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 1 08:37:37.877804 kernel: pinctrl core: initialized pinctrl subsystem
Jul 1 08:37:37.877811 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 1 08:37:37.877819 kernel: audit: initializing netlink subsys (disabled)
Jul 1 08:37:37.877827 kernel: audit: type=2000 audit(1751359053.972:1): state=initialized audit_enabled=0 res=1
Jul 1 08:37:37.877834 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 1 08:37:37.877842 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 1 08:37:37.877849 kernel: cpuidle: using governor menu
Jul 1 08:37:37.877857 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 1 08:37:37.877867 kernel: dca service started, version 1.12.1
Jul 1 08:37:37.877874 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 1 08:37:37.877882 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 1 08:37:37.877890 kernel: PCI: Using configuration type 1 for base access
Jul 1 08:37:37.877898 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 1 08:37:37.877905 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 1 08:37:37.877913 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 1 08:37:37.877921 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 1 08:37:37.877928 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 1 08:37:37.877938 kernel: ACPI: Added _OSI(Module Device)
Jul 1 08:37:37.877946 kernel: ACPI: Added _OSI(Processor Device)
Jul 1 08:37:37.877953 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 1 08:37:37.877961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 1 08:37:37.877968 kernel: ACPI: Interpreter enabled
Jul 1 08:37:37.877978 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 1 08:37:37.877986 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 1 08:37:37.877994 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 1 08:37:37.878001 kernel: PCI: Using E820 reservations for host bridge windows
Jul 1 08:37:37.878011 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 1 08:37:37.878019 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 1 08:37:37.878242 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 1 08:37:37.878384 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 1 08:37:37.878584 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 1 08:37:37.878596 kernel: PCI host bridge to bus 0000:00
Jul 1 08:37:37.878761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 1 08:37:37.878934 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 1 08:37:37.879079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 1 08:37:37.879208 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 1 08:37:37.879356 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 1 08:37:37.879494 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 1 08:37:37.879612 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 1 08:37:37.879770 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 1 08:37:37.879923 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 1 08:37:37.880082 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 1 08:37:37.880220 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 1 08:37:37.880345 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 1 08:37:37.880486 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 1 08:37:37.880647 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 1 08:37:37.880791 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 1 08:37:37.880914 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 1 08:37:37.881037 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 1 08:37:37.881204 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 1 08:37:37.881333 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 1 08:37:37.881474 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 1 08:37:37.881599 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 1 08:37:37.881767 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 1 08:37:37.881914 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 1 08:37:37.882038 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 1 08:37:37.882162 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 1 08:37:37.882297 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 1 08:37:37.882451 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 1 08:37:37.882580 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 1 08:37:37.882726 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 1 08:37:37.882866 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 1 08:37:37.882995 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 1 08:37:37.883147 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 1 08:37:37.883314 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 1 08:37:37.883327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 1 08:37:37.883335 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 1 08:37:37.883348 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 1 08:37:37.883356 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 1 08:37:37.883368 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 1 08:37:37.883376 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 1 08:37:37.883386 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 1 08:37:37.883394 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 1 08:37:37.883403 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 1 08:37:37.883443 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 1 08:37:37.883451 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 1 08:37:37.883462 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 1 08:37:37.883470 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 1 08:37:37.883478 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 1 08:37:37.883486 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 1 08:37:37.883494 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 1 08:37:37.883502 kernel: iommu: Default domain type: Translated
Jul 1 08:37:37.883510 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 1 08:37:37.883518 kernel: PCI: Using ACPI for IRQ routing
Jul 1 08:37:37.883526 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 1 08:37:37.883536 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 1 08:37:37.883544 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 1 08:37:37.883701 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 1 08:37:37.883835 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 1 08:37:37.883958 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 1 08:37:37.883969 kernel: vgaarb: loaded
Jul 1 08:37:37.883977 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 1 08:37:37.883985 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 1 08:37:37.883996 kernel: clocksource: Switched to clocksource kvm-clock
Jul 1 08:37:37.884003 kernel: VFS: Disk quotas dquot_6.6.0
Jul 1 08:37:37.884012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 1 08:37:37.884019 kernel: pnp: PnP ACPI init
Jul 1 08:37:37.884167 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 1 08:37:37.884180 kernel: pnp: PnP ACPI: found 6 devices
Jul 1 08:37:37.884188 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 1 08:37:37.884204 kernel: NET: Registered PF_INET protocol family
Jul 1 08:37:37.884216 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 1 08:37:37.884224 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 1 08:37:37.884232 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 1 08:37:37.884240 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 1 08:37:37.884248 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 1 08:37:37.884256 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 1 08:37:37.884264 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 1 08:37:37.884272 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 1 08:37:37.884280 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 1 08:37:37.884290 kernel: NET: Registered PF_XDP protocol family
Jul 1 08:37:37.884446 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 1 08:37:37.884566 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 1 08:37:37.884686 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 1 08:37:37.884828 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 1 08:37:37.884949 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 1 08:37:37.885061 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 1 08:37:37.885072 kernel: PCI: CLS 0 bytes, default 64
Jul 1 08:37:37.885084 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 1 08:37:37.885092 kernel: Initialise system trusted keyrings
Jul 1 08:37:37.885100 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 1 08:37:37.885108 kernel: Key type asymmetric registered
Jul 1 08:37:37.885115 kernel: Asymmetric key parser 'x509' registered
Jul 1 08:37:37.885123 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 1 08:37:37.885131 kernel: io scheduler mq-deadline registered
Jul 1 08:37:37.885139 kernel: io scheduler kyber registered
Jul 1 08:37:37.885147 kernel: io scheduler bfq registered
Jul 1 08:37:37.885158 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 1 08:37:37.885167 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 1 08:37:37.885175 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 1 08:37:37.885182 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 1 08:37:37.885190 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 1 08:37:37.885207 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 1 08:37:37.885215 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 1 08:37:37.885224 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 1 08:37:37.885232 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 1 08:37:37.885242 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 1 08:37:37.885383 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 1 08:37:37.885544 kernel: rtc_cmos 00:04: registered as rtc0
Jul 1 08:37:37.885665 kernel: rtc_cmos 00:04: setting system clock to 2025-07-01T08:37:37 UTC (1751359057)
Jul 1 08:37:37.885781 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 1 08:37:37.885792 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 1 08:37:37.885800 kernel: NET: Registered PF_INET6 protocol family
Jul 1 08:37:37.885808 kernel: Segment Routing with IPv6
Jul 1 08:37:37.885820 kernel: In-situ OAM (IOAM) with IPv6
Jul 1 08:37:37.885828 kernel: NET: Registered PF_PACKET protocol family
Jul 1 08:37:37.885836 kernel: Key type dns_resolver registered
Jul 1 08:37:37.885843 kernel: IPI shorthand broadcast: enabled
Jul 1 08:37:37.885851 kernel: sched_clock: Marking stable (3330004213, 147752716)->(3499727879, -21970950)
Jul 1 08:37:37.885859 kernel: registered taskstats version 1
Jul 1 08:37:37.885867 kernel: Loading compiled-in X.509 certificates
Jul 1 08:37:37.885875 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: bdab85da21e6e40e781d68d3bf17f0a40ee7357c'
Jul 1 08:37:37.885883 kernel: Demotion targets for Node 0: null
Jul 1 08:37:37.885894 kernel: Key type .fscrypt registered
Jul 1 08:37:37.885901 kernel: Key type fscrypt-provisioning registered
Jul 1 08:37:37.885909 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 1 08:37:37.885917 kernel: ima: Allocated hash algorithm: sha1
Jul 1 08:37:37.885925 kernel: ima: No architecture policies found
Jul 1 08:37:37.885933 kernel: clk: Disabling unused clocks
Jul 1 08:37:37.885941 kernel: Warning: unable to open an initial console.
Jul 1 08:37:37.885949 kernel: Freeing unused kernel image (initmem) memory: 54508K
Jul 1 08:37:37.885957 kernel: Write protecting the kernel read-only data: 24576k
Jul 1 08:37:37.885967 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 1 08:37:37.885975 kernel: Run /init as init process
Jul 1 08:37:37.885983 kernel: with arguments:
Jul 1 08:37:37.885991 kernel: /init
Jul 1 08:37:37.885999 kernel: with environment:
Jul 1 08:37:37.886006 kernel: HOME=/
Jul 1 08:37:37.886014 kernel: TERM=linux
Jul 1 08:37:37.886022 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 1 08:37:37.886031 systemd[1]: Successfully made /usr/ read-only.
Jul 1 08:37:37.886044 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 1 08:37:37.886066 systemd[1]: Detected virtualization kvm.
Jul 1 08:37:37.886074 systemd[1]: Detected architecture x86-64.
Jul 1 08:37:37.886083 systemd[1]: Running in initrd.
Jul 1 08:37:37.886091 systemd[1]: No hostname configured, using default hostname.
Jul 1 08:37:37.886102 systemd[1]: Hostname set to .
Jul 1 08:37:37.886110 systemd[1]: Initializing machine ID from VM UUID.
Jul 1 08:37:37.886119 systemd[1]: Queued start job for default target initrd.target.
Jul 1 08:37:37.886128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:37:37.886136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:37:37.886146 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 1 08:37:37.886154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 08:37:37.886163 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 1 08:37:37.886175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 1 08:37:37.886185 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 1 08:37:37.886201 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 1 08:37:37.886210 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:37:37.886220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:37:37.886229 systemd[1]: Reached target paths.target - Path Units.
Jul 1 08:37:37.886237 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 08:37:37.886248 systemd[1]: Reached target swap.target - Swaps.
Jul 1 08:37:37.886256 systemd[1]: Reached target timers.target - Timer Units.
Jul 1 08:37:37.886264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 08:37:37.886273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 08:37:37.886282 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 1 08:37:37.886290 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 1 08:37:37.886299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:37:37.886307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:37:37.886316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:37:37.886327 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 08:37:37.886335 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 1 08:37:37.886344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 08:37:37.886353 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 1 08:37:37.886362 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 1 08:37:37.886374 systemd[1]: Starting systemd-fsck-usr.service... Jul 1 08:37:37.886383 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 08:37:37.886392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 08:37:37.886400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:37:37.886423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 1 08:37:37.886433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:37:37.886444 systemd[1]: Finished systemd-fsck-usr.service. Jul 1 08:37:37.886453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 1 08:37:37.886495 systemd-journald[219]: Collecting audit messages is disabled. Jul 1 08:37:37.886519 systemd-journald[219]: Journal started Jul 1 08:37:37.886539 systemd-journald[219]: Runtime Journal (/run/log/journal/6c6a93cd88dc461f9be0bb75d6032f13) is 6M, max 48.6M, 42.5M free. Jul 1 08:37:37.877547 systemd-modules-load[222]: Inserted module 'overlay' Jul 1 08:37:37.915106 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 08:37:37.915129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jul 1 08:37:37.915150 kernel: Bridge firewalling registered Jul 1 08:37:37.909256 systemd-modules-load[222]: Inserted module 'br_netfilter' Jul 1 08:37:37.915528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 08:37:37.919139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:37:37.921438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 08:37:37.926072 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 1 08:37:37.930015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:37:37.958115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:37:37.960236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 1 08:37:37.972220 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 1 08:37:37.973492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:37:37.974858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:37:37.978237 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:37:37.980380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 08:37:37.983699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 1 08:37:37.986471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 1 08:37:38.012877 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:37:38.033844 systemd-resolved[261]: Positive Trust Anchors: Jul 1 08:37:38.033860 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:37:38.033897 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:37:38.037014 systemd-resolved[261]: Defaulting to hostname 'linux'. Jul 1 08:37:38.038343 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:37:38.044536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:37:38.136442 kernel: SCSI subsystem initialized Jul 1 08:37:38.146445 kernel: Loading iSCSI transport class v2.0-870. Jul 1 08:37:38.158438 kernel: iscsi: registered transport (tcp) Jul 1 08:37:38.183544 kernel: iscsi: registered transport (qla4xxx) Jul 1 08:37:38.183670 kernel: QLogic iSCSI HBA Driver Jul 1 08:37:38.214002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 1 08:37:38.245702 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:37:38.250812 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 1 08:37:38.323435 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 1 08:37:38.325517 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 1 08:37:38.392471 kernel: raid6: avx2x4 gen() 24842 MB/s Jul 1 08:37:38.409474 kernel: raid6: avx2x2 gen() 28514 MB/s Jul 1 08:37:38.426542 kernel: raid6: avx2x1 gen() 23654 MB/s Jul 1 08:37:38.426632 kernel: raid6: using algorithm avx2x2 gen() 28514 MB/s Jul 1 08:37:38.444567 kernel: raid6: .... xor() 18242 MB/s, rmw enabled Jul 1 08:37:38.444678 kernel: raid6: using avx2x2 recovery algorithm Jul 1 08:37:38.465471 kernel: xor: automatically using best checksumming function avx Jul 1 08:37:38.642454 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 1 08:37:38.652342 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 1 08:37:38.655325 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:37:38.690559 systemd-udevd[471]: Using default interface naming scheme 'v255'. Jul 1 08:37:38.696076 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:37:38.699656 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 1 08:37:38.733205 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Jul 1 08:37:38.788942 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 08:37:38.791947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 08:37:38.873758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:37:38.919788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 1 08:37:38.924429 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 1 08:37:38.926233 kernel: cryptd: max_cpu_qlen set to 1000 Jul 1 08:37:38.926247 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 1 08:37:38.937446 kernel: AES CTR mode by8 optimization enabled Jul 1 08:37:38.949198 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 1 08:37:38.949257 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 1 08:37:38.949280 kernel: GPT:9289727 != 19775487 Jul 1 08:37:38.949291 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 1 08:37:38.949302 kernel: GPT:9289727 != 19775487 Jul 1 08:37:38.949312 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 1 08:37:38.949322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:37:38.952978 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:37:38.953117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:37:38.958851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:37:38.965045 kernel: libata version 3.00 loaded. Jul 1 08:37:38.962617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:37:38.998999 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 1 08:37:39.007520 kernel: ahci 0000:00:1f.2: version 3.0 Jul 1 08:37:39.007723 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 1 08:37:39.008945 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 1 08:37:39.009127 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 1 08:37:39.010434 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 1 08:37:39.012643 kernel: scsi host0: ahci Jul 1 08:37:39.012831 kernel: scsi host1: ahci Jul 1 08:37:39.013762 kernel: scsi host2: ahci Jul 1 08:37:39.017441 kernel: scsi host3: ahci Jul 1 08:37:39.019451 kernel: scsi host4: ahci Jul 1 08:37:39.025445 kernel: scsi host5: ahci Jul 1 08:37:39.029302 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 1 08:37:39.029327 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 1 08:37:39.029338 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 1 08:37:39.029348 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 1 08:37:39.029365 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 1 08:37:39.030431 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 1 08:37:39.052005 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 1 08:37:39.071576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 1 08:37:39.073944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 1 08:37:39.074247 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:37:39.086953 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jul 1 08:37:39.095443 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 1 08:37:39.096325 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 1 08:37:39.343226 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 1 08:37:39.343270 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 1 08:37:39.343281 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 1 08:37:39.343442 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 1 08:37:39.344437 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 1 08:37:39.345448 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 1 08:37:39.346440 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 1 08:37:39.346453 kernel: ata3.00: applying bridge limits Jul 1 08:37:39.347492 kernel: ata3.00: configured for UDMA/100 Jul 1 08:37:39.348466 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 1 08:37:39.401011 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 1 08:37:39.401297 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 1 08:37:39.421642 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 1 08:37:39.586537 disk-uuid[631]: Primary Header is updated. Jul 1 08:37:39.586537 disk-uuid[631]: Secondary Entries is updated. Jul 1 08:37:39.586537 disk-uuid[631]: Secondary Header is updated. Jul 1 08:37:39.611004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:37:39.616491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:37:39.794604 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 1 08:37:39.810112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 08:37:39.811665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:37:39.812862 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 1 08:37:39.814104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 1 08:37:39.845865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 1 08:37:40.659126 disk-uuid[635]: The operation has completed successfully. Jul 1 08:37:40.660797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:37:40.694010 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 1 08:37:40.694152 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 1 08:37:40.724719 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 1 08:37:40.756804 sh[661]: Success Jul 1 08:37:40.779561 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 1 08:37:40.779641 kernel: device-mapper: uevent: version 1.0.3 Jul 1 08:37:40.779656 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 1 08:37:40.791457 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 1 08:37:40.824397 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 1 08:37:40.828534 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 1 08:37:40.845604 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 1 08:37:40.855375 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 1 08:37:40.855428 kernel: BTRFS: device fsid aeab36fb-d8a9-440c-a872-a8cce0218739 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (673) Jul 1 08:37:40.857694 kernel: BTRFS info (device dm-0): first mount of filesystem aeab36fb-d8a9-440c-a872-a8cce0218739 Jul 1 08:37:40.857723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:37:40.857734 kernel: BTRFS info (device dm-0): using free-space-tree Jul 1 08:37:40.863141 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 1 08:37:40.865515 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 1 08:37:40.866835 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 1 08:37:40.867789 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 1 08:37:40.869631 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 1 08:37:40.901450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706) Jul 1 08:37:40.903636 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:37:40.903670 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:37:40.903695 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:37:40.911452 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:37:40.912692 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 1 08:37:40.915003 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 1 08:37:41.002164 ignition[747]: Ignition 2.21.0 Jul 1 08:37:41.002181 ignition[747]: Stage: fetch-offline Jul 1 08:37:41.002236 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:37:41.002249 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:37:41.002355 ignition[747]: parsed url from cmdline: "" Jul 1 08:37:41.002360 ignition[747]: no config URL provided Jul 1 08:37:41.002365 ignition[747]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 08:37:41.002374 ignition[747]: no config at "/usr/lib/ignition/user.ign" Jul 1 08:37:41.002400 ignition[747]: op(1): [started] loading QEMU firmware config module Jul 1 08:37:41.002405 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 1 08:37:41.015495 ignition[747]: op(1): [finished] loading QEMU firmware config module Jul 1 08:37:41.020391 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 08:37:41.025219 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:37:41.057715 ignition[747]: parsing config with SHA512: 52cbcda7226ebd3b96ff8d7cb2ed19860f1a6143cf868b540bd2c4800b689a06d99b012e0973ef1fc5350ef3ffc4508bd3489d434ec7448b33f9980d1b463ae2 Jul 1 08:37:41.061936 unknown[747]: fetched base config from "system" Jul 1 08:37:41.062288 ignition[747]: fetch-offline: fetch-offline passed Jul 1 08:37:41.061951 unknown[747]: fetched user config from "qemu" Jul 1 08:37:41.062343 ignition[747]: Ignition finished successfully Jul 1 08:37:41.073549 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 08:37:41.082975 systemd-networkd[850]: lo: Link UP Jul 1 08:37:41.082987 systemd-networkd[850]: lo: Gained carrier Jul 1 08:37:41.084598 systemd-networkd[850]: Enumeration completed Jul 1 08:37:41.084699 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 1 08:37:41.084972 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:37:41.084977 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 08:37:41.086428 systemd-networkd[850]: eth0: Link UP Jul 1 08:37:41.086433 systemd-networkd[850]: eth0: Gained carrier Jul 1 08:37:41.086442 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:37:41.087149 systemd[1]: Reached target network.target - Network. Jul 1 08:37:41.088956 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 1 08:37:41.091171 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 1 08:37:41.102539 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 1 08:37:41.133828 ignition[854]: Ignition 2.21.0 Jul 1 08:37:41.133843 ignition[854]: Stage: kargs Jul 1 08:37:41.133994 ignition[854]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:37:41.134006 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:37:41.137234 ignition[854]: kargs: kargs passed Jul 1 08:37:41.137346 ignition[854]: Ignition finished successfully Jul 1 08:37:41.143685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 1 08:37:41.146038 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 1 08:37:41.186920 ignition[863]: Ignition 2.21.0 Jul 1 08:37:41.186935 ignition[863]: Stage: disks Jul 1 08:37:41.187132 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:37:41.187143 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:37:41.190876 ignition[863]: disks: disks passed Jul 1 08:37:41.190976 ignition[863]: Ignition finished successfully Jul 1 08:37:41.194508 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 1 08:37:41.194854 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 1 08:37:41.197556 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 1 08:37:41.197761 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:37:41.198082 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:37:41.198692 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:37:41.200151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 1 08:37:41.229636 systemd-fsck[873]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 1 08:37:41.367731 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 1 08:37:41.370171 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 1 08:37:41.514490 kernel: EXT4-fs (vda9): mounted filesystem 18421243-07cc-41b2-b496-d6a2cef84352 r/w with ordered data mode. Quota mode: none. Jul 1 08:37:41.515297 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 1 08:37:41.516201 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 1 08:37:41.518722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:37:41.521921 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 1 08:37:41.523134 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 1 08:37:41.523181 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 1 08:37:41.523209 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 08:37:41.532527 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 1 08:37:41.535629 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 1 08:37:41.539466 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881) Jul 1 08:37:41.539495 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:37:41.541660 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:37:41.541689 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:37:41.547302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 1 08:37:41.588236 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory Jul 1 08:37:41.592503 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory Jul 1 08:37:41.596580 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory Jul 1 08:37:41.601989 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory Jul 1 08:37:41.690358 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 1 08:37:41.692665 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 1 08:37:41.694382 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 1 08:37:41.718464 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:37:41.732127 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 1 08:37:41.749721 ignition[995]: INFO : Ignition 2.21.0 Jul 1 08:37:41.749721 ignition[995]: INFO : Stage: mount Jul 1 08:37:41.751725 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:37:41.751725 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:37:41.751725 ignition[995]: INFO : mount: mount passed Jul 1 08:37:41.751725 ignition[995]: INFO : Ignition finished successfully Jul 1 08:37:41.757943 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 1 08:37:41.759163 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 1 08:37:41.854714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 1 08:37:41.856690 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:37:41.893139 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1007) Jul 1 08:37:41.893193 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:37:41.893209 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:37:41.893964 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:37:41.898477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 1 08:37:41.930856 ignition[1024]: INFO : Ignition 2.21.0 Jul 1 08:37:41.930856 ignition[1024]: INFO : Stage: files Jul 1 08:37:41.945742 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:37:41.945742 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:37:41.945742 ignition[1024]: DEBUG : files: compiled without relabeling support, skipping Jul 1 08:37:41.945742 ignition[1024]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 1 08:37:41.945742 ignition[1024]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 1 08:37:41.952334 ignition[1024]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 1 08:37:41.952334 ignition[1024]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 1 08:37:41.952334 ignition[1024]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 1 08:37:41.952334 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 1 08:37:41.952334 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 1 08:37:41.947556 unknown[1024]: wrote ssh authorized keys file for user: core Jul 1 08:37:41.985356 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 1 08:37:42.121044 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 1 08:37:42.121044 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 1 08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 1 
08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:37:42.125030 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:37:42.368975 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:37:42.371204 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:37:42.371204 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:37:42.473310 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:37:42.473310 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:37:42.478150 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 1 08:37:42.503580 systemd-networkd[850]: eth0: Gained IPv6LL Jul 1 08:37:43.200158 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 1 08:37:43.466144 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:37:43.466144 ignition[1024]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 1 08:37:43.469831 ignition[1024]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:37:43.475931 ignition[1024]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:37:43.475931 ignition[1024]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 1 08:37:43.475931 ignition[1024]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 1 08:37:43.480091 ignition[1024]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 1 08:37:43.480091 ignition[1024]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 1 08:37:43.480091 ignition[1024]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 1 08:37:43.480091 ignition[1024]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 1 08:37:43.500810 ignition[1024]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 1 08:37:43.505936 ignition[1024]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 1 08:37:43.507605 
ignition[1024]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 1 08:37:43.507605 ignition[1024]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 1 08:37:43.510322 ignition[1024]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 1 08:37:43.510322 ignition[1024]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:37:43.510322 ignition[1024]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:37:43.510322 ignition[1024]: INFO : files: files passed Jul 1 08:37:43.510322 ignition[1024]: INFO : Ignition finished successfully Jul 1 08:37:43.514066 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 1 08:37:43.517058 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 1 08:37:43.520020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 1 08:37:43.539151 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 1 08:37:43.539295 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 1 08:37:43.542492 initrd-setup-root-after-ignition[1053]: grep: /sysroot/oem/oem-release: No such file or directory Jul 1 08:37:43.546951 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:37:43.546951 initrd-setup-root-after-ignition[1055]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:37:43.550676 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:37:43.554479 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jul 1 08:37:43.556197 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 1 08:37:43.560284 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 1 08:37:43.639755 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 1 08:37:43.639951 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 1 08:37:43.641508 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 1 08:37:43.644097 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 1 08:37:43.647672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 1 08:37:43.650646 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 1 08:37:43.688842 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 08:37:43.690889 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 1 08:37:43.730816 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 1 08:37:43.731063 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:37:43.734896 systemd[1]: Stopped target timers.target - Timer Units.
Jul 1 08:37:43.738158 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 1 08:37:43.738363 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 08:37:43.741267 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 1 08:37:43.743427 systemd[1]: Stopped target basic.target - Basic System.
Jul 1 08:37:43.745728 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 1 08:37:43.746826 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 1 08:37:43.747228 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 1 08:37:43.747770 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 1 08:37:43.748181 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 1 08:37:43.748766 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 1 08:37:43.749163 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 1 08:37:43.749561 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 1 08:37:43.750136 systemd[1]: Stopped target swap.target - Swaps.
Jul 1 08:37:43.750477 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 1 08:37:43.750629 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 1 08:37:43.751479 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:37:43.752026 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:37:43.752571 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 1 08:37:43.774075 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:37:43.775715 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 1 08:37:43.775892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 1 08:37:43.781820 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 1 08:37:43.782123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 1 08:37:43.783447 systemd[1]: Stopped target paths.target - Path Units.
Jul 1 08:37:43.787963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 1 08:37:43.789518 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:37:43.789711 systemd[1]: Stopped target slices.target - Slice Units.
Jul 1 08:37:43.794817 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 1 08:37:43.796240 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 1 08:37:43.796395 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 08:37:43.798486 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 1 08:37:43.798625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 08:37:43.800142 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 1 08:37:43.800362 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 08:37:43.801127 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 1 08:37:43.801281 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 1 08:37:43.806495 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 1 08:37:43.807663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 1 08:37:43.807810 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:37:43.812877 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 1 08:37:43.816166 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 1 08:37:43.817497 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 08:37:43.821550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 1 08:37:43.822726 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 1 08:37:43.834197 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 1 08:37:43.835514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 1 08:37:43.842161 ignition[1079]: INFO : Ignition 2.21.0
Jul 1 08:37:43.842161 ignition[1079]: INFO : Stage: umount
Jul 1 08:37:43.844212 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 08:37:43.844212 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:37:43.844212 ignition[1079]: INFO : umount: umount passed
Jul 1 08:37:43.844212 ignition[1079]: INFO : Ignition finished successfully
Jul 1 08:37:43.851048 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 1 08:37:43.851986 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 1 08:37:43.852165 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 1 08:37:43.854146 systemd[1]: Stopped target network.target - Network.
Jul 1 08:37:43.857797 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 1 08:37:43.857969 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 1 08:37:43.860278 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 1 08:37:43.860346 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 1 08:37:43.860794 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 1 08:37:43.860870 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 1 08:37:43.861144 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 1 08:37:43.861203 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 1 08:37:43.861674 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 1 08:37:43.862057 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 1 08:37:43.873542 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 1 08:37:43.873701 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 1 08:37:43.879344 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 1 08:37:43.879872 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 1 08:37:43.879937 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 1 08:37:43.884012 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 1 08:37:43.887688 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 1 08:37:43.887825 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 1 08:37:43.892180 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 1 08:37:43.892368 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 1 08:37:43.896055 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 1 08:37:43.896137 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:37:43.900427 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 1 08:37:43.901338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 1 08:37:43.901401 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 1 08:37:43.901838 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 1 08:37:43.901886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 1 08:37:43.906475 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 1 08:37:43.906527 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:37:43.907012 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 08:37:43.908403 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 1 08:37:43.934695 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 1 08:37:43.948724 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 08:37:43.950725 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 1 08:37:43.950793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:37:43.951829 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 1 08:37:43.951892 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:37:43.953956 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 1 08:37:43.954042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 1 08:37:43.957732 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 1 08:37:43.957795 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 1 08:37:43.960194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 1 08:37:43.960273 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 08:37:43.966581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 1 08:37:43.968687 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 1 08:37:43.968764 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:37:43.972797 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 1 08:37:43.972880 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 08:37:43.976219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 08:37:43.976300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:37:43.982385 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 1 08:37:43.986674 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 1 08:37:43.997073 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 1 08:37:43.997260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 1 08:37:44.065834 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 1 08:37:44.066044 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 1 08:37:44.067664 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 1 08:37:44.069297 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 1 08:37:44.069395 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 1 08:37:44.072807 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 1 08:37:44.100796 systemd[1]: Switching root.
Jul 1 08:37:44.145820 systemd-journald[219]: Journal stopped
Jul 1 08:37:46.101076 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 1 08:37:46.101239 kernel: SELinux: policy capability network_peer_controls=1
Jul 1 08:37:46.101284 kernel: SELinux: policy capability open_perms=1
Jul 1 08:37:46.101305 kernel: SELinux: policy capability extended_socket_class=1
Jul 1 08:37:46.101324 kernel: SELinux: policy capability always_check_network=0
Jul 1 08:37:46.101341 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 1 08:37:46.101357 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 1 08:37:46.101372 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 1 08:37:46.101388 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 1 08:37:46.101403 kernel: SELinux: policy capability userspace_initial_context=0
Jul 1 08:37:46.101433 kernel: audit: type=1403 audit(1751359065.132:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 1 08:37:46.101467 systemd[1]: Successfully loaded SELinux policy in 63.346ms.
Jul 1 08:37:46.101512 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.596ms.
Jul 1 08:37:46.101530 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 1 08:37:46.101547 systemd[1]: Detected virtualization kvm.
Jul 1 08:37:46.101562 systemd[1]: Detected architecture x86-64.
Jul 1 08:37:46.101578 systemd[1]: Detected first boot.
Jul 1 08:37:46.101592 systemd[1]: Initializing machine ID from VM UUID.
Jul 1 08:37:46.101607 zram_generator::config[1128]: No configuration found.
Jul 1 08:37:46.101631 kernel: Guest personality initialized and is inactive
Jul 1 08:37:46.101645 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 1 08:37:46.101659 kernel: Initialized host personality
Jul 1 08:37:46.101678 kernel: NET: Registered PF_VSOCK protocol family
Jul 1 08:37:46.101697 systemd[1]: Populated /etc with preset unit settings.
Jul 1 08:37:46.101714 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 1 08:37:46.101729 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 1 08:37:46.101744 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 1 08:37:46.101760 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 1 08:37:46.101783 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 1 08:37:46.101801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 1 08:37:46.101829 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 1 08:37:46.101845 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 1 08:37:46.101861 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 1 08:37:46.101878 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 1 08:37:46.101896 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 1 08:37:46.101913 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 1 08:37:46.101930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:37:46.101958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:37:46.101985 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 1 08:37:46.102003 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 1 08:37:46.102020 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 1 08:37:46.102041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 08:37:46.102058 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 1 08:37:46.102082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:37:46.102112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:37:46.102133 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 1 08:37:46.102158 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 1 08:37:46.102178 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 1 08:37:46.102198 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 1 08:37:46.102217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:37:46.102237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 1 08:37:46.102253 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 08:37:46.102269 systemd[1]: Reached target swap.target - Swaps.
Jul 1 08:37:46.102286 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 1 08:37:46.102309 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 1 08:37:46.102325 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 1 08:37:46.102352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:37:46.102371 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:37:46.102388 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:37:46.102404 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 1 08:37:46.102451 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 1 08:37:46.102472 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 1 08:37:46.102489 systemd[1]: Mounting media.mount - External Media Directory...
Jul 1 08:37:46.102515 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:46.102532 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 1 08:37:46.102550 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 1 08:37:46.102570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 1 08:37:46.102587 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 1 08:37:46.102607 systemd[1]: Reached target machines.target - Containers.
Jul 1 08:37:46.102623 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 1 08:37:46.102653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:37:46.102676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 1 08:37:46.102692 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 1 08:37:46.102712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:37:46.102728 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 1 08:37:46.102744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:37:46.102760 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 1 08:37:46.102776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:37:46.102796 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 1 08:37:46.102820 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 1 08:37:46.102836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 1 08:37:46.102852 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 1 08:37:46.102868 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 1 08:37:46.102886 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:37:46.102902 kernel: fuse: init (API version 7.41)
Jul 1 08:37:46.102921 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 1 08:37:46.102940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 1 08:37:46.102959 kernel: ACPI: bus type drm_connector registered
Jul 1 08:37:46.102993 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 1 08:37:46.103013 kernel: loop: module loaded
Jul 1 08:37:46.103033 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 1 08:37:46.103050 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 1 08:37:46.103069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 1 08:37:46.103094 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 1 08:37:46.103111 systemd[1]: Stopped verity-setup.service.
Jul 1 08:37:46.103127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:46.103180 systemd-journald[1210]: Collecting audit messages is disabled.
Jul 1 08:37:46.103241 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 1 08:37:46.103272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 1 08:37:46.103295 systemd-journald[1210]: Journal started
Jul 1 08:37:46.103333 systemd-journald[1210]: Runtime Journal (/run/log/journal/6c6a93cd88dc461f9be0bb75d6032f13) is 6M, max 48.6M, 42.5M free.
Jul 1 08:37:46.104540 systemd[1]: Mounted media.mount - External Media Directory.
Jul 1 08:37:45.748808 systemd[1]: Queued start job for default target multi-user.target.
Jul 1 08:37:45.771875 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 1 08:37:45.772429 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 1 08:37:46.108435 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 1 08:37:46.110582 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 1 08:37:46.113086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 1 08:37:46.114495 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 1 08:37:46.115853 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 1 08:37:46.117501 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:37:46.119094 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 1 08:37:46.119398 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 1 08:37:46.121861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:37:46.122178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:37:46.123957 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 1 08:37:46.124285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 1 08:37:46.125782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:37:46.126074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:37:46.127716 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 1 08:37:46.128013 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 1 08:37:46.129517 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:37:46.129813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:37:46.131327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:37:46.132886 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:37:46.134578 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 1 08:37:46.136210 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 1 08:37:46.156794 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 1 08:37:46.159963 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 1 08:37:46.162890 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 1 08:37:46.164401 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 1 08:37:46.164458 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 1 08:37:46.167072 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 1 08:37:46.170167 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 1 08:37:46.172089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 1 08:37:46.174210 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 1 08:37:46.178530 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 1 08:37:46.179713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 1 08:37:46.192898 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 1 08:37:46.194092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 1 08:37:46.196521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 1 08:37:46.199554 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 1 08:37:46.203009 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 1 08:37:46.206024 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 1 08:37:46.207475 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 1 08:37:46.210003 systemd-journald[1210]: Time spent on flushing to /var/log/journal/6c6a93cd88dc461f9be0bb75d6032f13 is 16.795ms for 976 entries.
Jul 1 08:37:46.210003 systemd-journald[1210]: System Journal (/var/log/journal/6c6a93cd88dc461f9be0bb75d6032f13) is 8M, max 195.6M, 187.6M free.
Jul 1 08:37:46.251553 systemd-journald[1210]: Received client request to flush runtime journal.
Jul 1 08:37:46.251707 kernel: loop0: detected capacity change from 0 to 146336
Jul 1 08:37:46.219789 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 1 08:37:46.221611 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 1 08:37:46.225674 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 1 08:37:46.239002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 08:37:46.254237 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 1 08:37:46.262184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 1 08:37:46.264230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 1 08:37:46.275877 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 1 08:37:46.279230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 1 08:37:46.280952 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 1 08:37:46.300451 kernel: loop1: detected capacity change from 0 to 114000
Jul 1 08:37:46.308800 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jul 1 08:37:46.308820 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jul 1 08:37:46.315266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 08:37:46.329450 kernel: loop2: detected capacity change from 0 to 229808
Jul 1 08:37:46.405446 kernel: loop3: detected capacity change from 0 to 146336
Jul 1 08:37:46.422562 kernel: loop4: detected capacity change from 0 to 114000
Jul 1 08:37:46.435452 kernel: loop5: detected capacity change from 0 to 229808
Jul 1 08:37:46.445261 (sd-merge)[1266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 1 08:37:46.446022 (sd-merge)[1266]: Merged extensions into '/usr'.
Jul 1 08:37:46.451330 systemd[1]: Reload requested from client PID 1244 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 1 08:37:46.451347 systemd[1]: Reloading...
Jul 1 08:37:46.526450 zram_generator::config[1289]: No configuration found.
Jul 1 08:37:46.662425 ldconfig[1239]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 1 08:37:46.678134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 1 08:37:46.771195 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 1 08:37:46.771502 systemd[1]: Reloading finished in 319 ms.
Jul 1 08:37:46.799633 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 1 08:37:46.801753 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 1 08:37:46.830530 systemd[1]: Starting ensure-sysext.service...
Jul 1 08:37:46.833039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 1 08:37:46.845687 systemd[1]: Reload requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)...
Jul 1 08:37:46.845711 systemd[1]: Reloading...
Jul 1 08:37:46.856456 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 1 08:37:46.856502 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 1 08:37:46.856793 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 1 08:37:46.857070 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 1 08:37:46.858180 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 1 08:37:46.858472 systemd-tmpfiles[1330]: ACLs are not supported, ignoring.
Jul 1 08:37:46.858571 systemd-tmpfiles[1330]: ACLs are not supported, ignoring.
Jul 1 08:37:46.864517 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot.
Jul 1 08:37:46.864636 systemd-tmpfiles[1330]: Skipping /boot
Jul 1 08:37:46.875481 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot.
Jul 1 08:37:46.875566 systemd-tmpfiles[1330]: Skipping /boot
Jul 1 08:37:46.903522 zram_generator::config[1360]: No configuration found.
Jul 1 08:37:46.999451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 1 08:37:47.099718 systemd[1]: Reloading finished in 253 ms.
Jul 1 08:37:47.127297 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 1 08:37:47.158314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 1 08:37:47.170181 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 1 08:37:47.173428 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 1 08:37:47.181693 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 1 08:37:47.185876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 1 08:37:47.192058 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 08:37:47.195665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 1 08:37:47.201551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:47.201793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:37:47.205735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:37:47.209081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:37:47.212741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:37:47.213908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 1 08:37:47.214089 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:37:47.217816 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 1 08:37:47.218959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:47.220672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:37:47.220955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:37:47.234636 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 1 08:37:47.239791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:37:47.240147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:37:47.242796 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:37:47.243549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:37:47.248199 systemd-udevd[1401]: Using default interface naming scheme 'v255'.
Jul 1 08:37:47.250973 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 1 08:37:47.256449 augenrules[1428]: No rules
Jul 1 08:37:47.259261 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 1 08:37:47.259789 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 1 08:37:47.267379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:47.268256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:37:47.270207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:37:47.272917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 1 08:37:47.277547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:37:47.281632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:37:47.282862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 1 08:37:47.282905 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:37:47.287601 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 1 08:37:47.288697 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:47.289115 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 1 08:37:47.290723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 08:37:47.293703 systemd[1]: Finished ensure-sysext.service.
Jul 1 08:37:47.295018 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 1 08:37:47.296698 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 1 08:37:47.297620 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 1 08:37:47.299113 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:37:47.299376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:37:47.308730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:37:47.309747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:37:47.311487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:37:47.311800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:37:47.325301 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 1 08:37:47.326495 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 1 08:37:47.326582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 1 08:37:47.328360 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 1 08:37:47.329774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 1 08:37:47.330215 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 1 08:37:47.397032 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 1 08:37:47.440778 systemd-resolved[1399]: Positive Trust Anchors:
Jul 1 08:37:47.440802 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 1 08:37:47.440841 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 1 08:37:47.445139 systemd-resolved[1399]: Defaulting to hostname 'linux'.
Jul 1 08:37:47.447776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 1 08:37:47.449165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 1 08:37:47.494625 systemd-networkd[1475]: lo: Link UP
Jul 1 08:37:47.494641 systemd-networkd[1475]: lo: Gained carrier
Jul 1 08:37:47.497064 systemd-networkd[1475]: Enumeration completed
Jul 1 08:37:47.497188 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 1 08:37:47.498737 systemd[1]: Reached target network.target - Network.
Jul 1 08:37:47.499918 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:37:47.499947 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 1 08:37:47.500675 systemd-networkd[1475]: eth0: Link UP
Jul 1 08:37:47.500866 systemd-networkd[1475]: eth0: Gained carrier
Jul 1 08:37:47.500889 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:37:47.503693 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 1 08:37:47.506573 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 1 08:37:47.509438 kernel: mousedev: PS/2 mouse device common for all mice
Jul 1 08:37:47.516659 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 1 08:37:47.516799 systemd-networkd[1475]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 1 08:37:47.518666 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection.
Jul 1 08:37:47.521167 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 1 08:37:47.521224 systemd-timesyncd[1477]: Initial clock synchronization to Tue 2025-07-01 08:37:47.362816 UTC.
Jul 1 08:37:47.524390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 1 08:37:47.526263 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 1 08:37:47.527626 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 1 08:37:47.528901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 1 08:37:47.530318 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 1 08:37:47.531663 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 1 08:37:47.533016 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 1 08:37:47.533058 systemd[1]: Reached target paths.target - Path Units.
Jul 1 08:37:47.534008 systemd[1]: Reached target time-set.target - System Time Set.
Jul 1 08:37:47.535475 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 1 08:37:47.536730 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 1 08:37:47.538199 systemd[1]: Reached target timers.target - Timer Units.
Jul 1 08:37:47.540185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 1 08:37:47.543801 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 1 08:37:47.548382 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 1 08:37:47.549885 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 1 08:37:47.551236 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 1 08:37:47.553440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 1 08:37:47.558451 kernel: ACPI: button: Power Button [PWRF]
Jul 1 08:37:47.565496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 1 08:37:47.568686 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 1 08:37:47.569039 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 1 08:37:47.568983 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 1 08:37:47.572696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 1 08:37:47.574663 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 1 08:37:47.576686 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 1 08:37:47.588201 systemd[1]: Reached target sockets.target - Socket Units.
Jul 1 08:37:47.590500 systemd[1]: Reached target basic.target - Basic System.
Jul 1 08:37:47.591602 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 1 08:37:47.591639 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 1 08:37:47.594521 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 1 08:37:47.601748 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 1 08:37:47.604095 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 1 08:37:47.613681 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 1 08:37:47.616836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 1 08:37:47.617855 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 1 08:37:47.619892 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 1 08:37:47.623618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 1 08:37:47.625594 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 1 08:37:47.625678 jq[1517]: false
Jul 1 08:37:47.629324 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 1 08:37:47.636342 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 1 08:37:47.645659 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 1 08:37:47.647816 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 1 08:37:47.648363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 1 08:37:47.649282 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache
Jul 1 08:37:47.650457 oslogin_cache_refresh[1520]: Refreshing passwd entry cache
Jul 1 08:37:47.650627 systemd[1]: Starting update-engine.service - Update Engine...
Jul 1 08:37:47.655587 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 1 08:37:47.658620 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 1 08:37:47.661592 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting
Jul 1 08:37:47.661592 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 1 08:37:47.661575 oslogin_cache_refresh[1520]: Failure getting users, quitting
Jul 1 08:37:47.661708 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache
Jul 1 08:37:47.661599 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 1 08:37:47.661665 oslogin_cache_refresh[1520]: Refreshing group entry cache
Jul 1 08:37:47.663943 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 1 08:37:47.665936 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 1 08:37:47.666233 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 1 08:37:47.672258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 1 08:37:47.672704 oslogin_cache_refresh[1520]: Failure getting groups, quitting
Jul 1 08:37:47.674653 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting
Jul 1 08:37:47.674653 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 1 08:37:47.674203 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 1 08:37:47.672717 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 1 08:37:47.676061 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 1 08:37:47.676340 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 1 08:37:47.680452 systemd[1]: motdgen.service: Deactivated successfully.
Jul 1 08:37:47.680755 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 1 08:37:47.684825 jq[1532]: true
Jul 1 08:37:47.689433 extend-filesystems[1518]: Found /dev/vda6
Jul 1 08:37:47.693431 extend-filesystems[1518]: Found /dev/vda9
Jul 1 08:37:47.698736 extend-filesystems[1518]: Checking size of /dev/vda9
Jul 1 08:37:47.704825 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 1 08:37:47.706132 update_engine[1530]: I20250701 08:37:47.705955 1530 main.cc:92] Flatcar Update Engine starting
Jul 1 08:37:47.713777 tar[1541]: linux-amd64/LICENSE
Jul 1 08:37:47.714127 tar[1541]: linux-amd64/helm
Jul 1 08:37:47.718119 jq[1546]: true
Jul 1 08:37:47.726327 extend-filesystems[1518]: Resized partition /dev/vda9
Jul 1 08:37:47.730866 extend-filesystems[1563]: resize2fs 1.47.2 (1-Jan-2025)
Jul 1 08:37:47.789152 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 1 08:37:47.789223 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 1 08:37:47.811991 update_engine[1530]: I20250701 08:37:47.767979 1530 update_check_scheduler.cc:74] Next update check in 3m17s
Jul 1 08:37:47.754478 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 1 08:37:47.754312 dbus-daemon[1514]: [system] SELinux support is enabled
Jul 1 08:37:47.758118 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 1 08:37:47.758143 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 1 08:37:47.759695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 1 08:37:47.759717 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 1 08:37:47.767770 systemd[1]: Started update-engine.service - Update Engine.
Jul 1 08:37:47.777548 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 1 08:37:47.813120 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 1 08:37:47.813120 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 1 08:37:47.813120 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 1 08:37:47.820525 extend-filesystems[1518]: Resized filesystem in /dev/vda9
Jul 1 08:37:47.814747 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 1 08:37:47.817357 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 1 08:37:47.828510 bash[1579]: Updated "/home/core/.ssh/authorized_keys"
Jul 1 08:37:47.828767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:37:47.830727 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 1 08:37:47.834233 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 1 08:37:47.840083 kernel: kvm_amd: TSC scaling supported
Jul 1 08:37:47.840119 kernel: kvm_amd: Nested Virtualization enabled
Jul 1 08:37:47.840134 kernel: kvm_amd: Nested Paging enabled
Jul 1 08:37:47.840158 kernel: kvm_amd: LBR virtualization supported
Jul 1 08:37:47.842270 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 1 08:37:47.842398 kernel: kvm_amd: Virtual GIF supported
Jul 1 08:37:47.842452 sshd_keygen[1540]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 1 08:37:47.877901 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 1 08:37:47.881046 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 1 08:37:47.945097 systemd[1]: issuegen.service: Deactivated successfully.
Jul 1 08:37:47.945397 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 1 08:37:47.977129 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 1 08:37:47.985801 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 1 08:37:47.985832 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 1 08:37:47.987079 systemd-logind[1525]: New seat seat0.
Jul 1 08:37:47.988880 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 1 08:37:47.991450 kernel: EDAC MC: Ver: 3.0.0
Jul 1 08:37:48.003473 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 1 08:37:48.007108 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 1 08:37:48.010639 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 1 08:37:48.012810 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 1 08:37:48.014202 systemd[1]: Reached target getty.target - Login Prompts.
Jul 1 08:37:48.045168 containerd[1542]: time="2025-07-01T08:37:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 1 08:37:48.046315 containerd[1542]: time="2025-07-01T08:37:48.046280005Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 1 08:37:48.054552 containerd[1542]: time="2025-07-01T08:37:48.054515727Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.22µs"
Jul 1 08:37:48.054552 containerd[1542]: time="2025-07-01T08:37:48.054540489Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 1 08:37:48.054641 containerd[1542]: time="2025-07-01T08:37:48.054556637Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 1 08:37:48.054748 containerd[1542]: time="2025-07-01T08:37:48.054720596Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 1 08:37:48.054748 containerd[1542]: time="2025-07-01T08:37:48.054743865Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 1 08:37:48.054786 containerd[1542]: time="2025-07-01T08:37:48.054767742Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 1 08:37:48.054900 containerd[1542]: time="2025-07-01T08:37:48.054871650Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 1 08:37:48.054900 containerd[1542]: time="2025-07-01T08:37:48.054894310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055177 containerd[1542]: time="2025-07-01T08:37:48.055149685Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055177 containerd[1542]: time="2025-07-01T08:37:48.055165823Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055224 containerd[1542]: time="2025-07-01T08:37:48.055175624Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055224 containerd[1542]: time="2025-07-01T08:37:48.055183021Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055294 containerd[1542]: time="2025-07-01T08:37:48.055275299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055569 containerd[1542]: time="2025-07-01T08:37:48.055543187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055592 containerd[1542]: time="2025-07-01T08:37:48.055578459Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 1 08:37:48.055592 containerd[1542]: time="2025-07-01T08:37:48.055588261Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 1 08:37:48.055651 containerd[1542]: time="2025-07-01T08:37:48.055630879Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 1 08:37:48.055970 containerd[1542]: time="2025-07-01T08:37:48.055946139Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 1 08:37:48.056043 containerd[1542]: time="2025-07-01T08:37:48.056027073Z" level=info msg="metadata content store policy set" policy=shared
Jul 1 08:37:48.062776 containerd[1542]: time="2025-07-01T08:37:48.062693371Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 1 08:37:48.062776 containerd[1542]: time="2025-07-01T08:37:48.062736077Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 1 08:37:48.062776 containerd[1542]: time="2025-07-01T08:37:48.062748512Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 1 08:37:48.062776 containerd[1542]: time="2025-07-01T08:37:48.062759709Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 1 08:37:48.062776 containerd[1542]: time="2025-07-01T08:37:48.062769531Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 1 08:37:48.062776 containerd[1542]: time="2025-07-01T08:37:48.062779569Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 1 08:37:48.062911 containerd[1542]: time="2025-07-01T08:37:48.062802916Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 1 08:37:48.062911 containerd[1542]: time="2025-07-01T08:37:48.062814614Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 1 08:37:48.062911 containerd[1542]: time="2025-07-01T08:37:48.062823984Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 1 08:37:48.062911 containerd[1542]: time="2025-07-01T08:37:48.062832873Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 1 08:37:48.062911 containerd[1542]: time="2025-07-01T08:37:48.062842087Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 1 08:37:48.062911 containerd[1542]: time="2025-07-01T08:37:48.062860248Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 1 08:37:48.063021 containerd[1542]: time="2025-07-01T08:37:48.062974577Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 1 08:37:48.063021 containerd[1542]: time="2025-07-01T08:37:48.062991462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 1 08:37:48.063021 containerd[1542]: time="2025-07-01T08:37:48.063004083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 1 08:37:48.064106 containerd[1542]: time="2025-07-01T08:37:48.064053721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 1 08:37:48.064197 containerd[1542]: time="2025-07-01T08:37:48.064171419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 1 08:37:48.064279 containerd[1542]: time="2025-07-01T08:37:48.064260948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 1 08:37:48.064370 containerd[1542]: time="2025-07-01T08:37:48.064350633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 1 08:37:48.064480 containerd[1542]: time="2025-07-01T08:37:48.064462016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 1 08:37:48.064551 containerd[1542]: time="2025-07-01T08:37:48.064534179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 1 08:37:48.064614 containerd[1542]: time="2025-07-01T08:37:48.064600036Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 1 08:37:48.064686 containerd[1542]: time="2025-07-01T08:37:48.064671119Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 1 08:37:48.064804 containerd[1542]: time="2025-07-01T08:37:48.064791528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 1 08:37:48.064853 containerd[1542]: time="2025-07-01T08:37:48.064843123Z" level=info msg="Start snapshots syncer"
Jul 1 08:37:48.064942 containerd[1542]: time="2025-07-01T08:37:48.064925256Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 1 08:37:48.065209 containerd[1542]: time="2025-07-01T08:37:48.065177379Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 1 08:37:48.065384 containerd[1542]: time="2025-07-01T08:37:48.065363881Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 1 08:37:48.065564 containerd[1542]: time="2025-07-01T08:37:48.065540158Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 1 08:37:48.065772 containerd[1542]: time="2025-07-01T08:37:48.065752738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 1 08:37:48.065850 containerd[1542]: time="2025-07-01T08:37:48.065835961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 1 08:37:48.065898 containerd[1542]: time="2025-07-01T08:37:48.065887065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 1 08:37:48.065944 containerd[1542]: time="2025-07-01T08:37:48.065931559Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 1 08:37:48.066018 containerd[1542]: time="2025-07-01T08:37:48.066002632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 1 08:37:48.066070 containerd[1542]: time="2025-07-01T08:37:48.066058726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 1 08:37:48.066126 containerd[1542]: time="2025-07-01T08:37:48.066112541Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 1 08:37:48.066183 containerd[1542]: time="2025-07-01T08:37:48.066172574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 1 08:37:48.066239 containerd[1542]: time="2025-07-01T08:37:48.066227607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 1 08:37:48.066294 containerd[1542]: time="2025-07-01T08:37:48.066281020Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 1 08:37:48.066381 containerd[1542]: time="2025-07-01T08:37:48.066363359Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 1 08:37:48.066485 containerd[1542]: time="2025-07-01T08:37:48.066464683Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 1 08:37:48.066548 containerd[1542]: time="2025-07-01T08:37:48.066533536Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 1 08:37:48.066640 containerd[1542]: time="2025-07-01T08:37:48.066618880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 1 08:37:48.066738 containerd[1542]: time="2025-07-01T08:37:48.066716050Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 1 08:37:48.066810 containerd[1542]: time="2025-07-01T08:37:48.066795354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 1 08:37:48.066859 containerd[1542]: time="2025-07-01T08:37:48.066848236Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 1 08:37:48.066912 containerd[1542]: time="2025-07-01T08:37:48.066901914Z" level=info msg="runtime interface created"
Jul 1 08:37:48.066962 containerd[1542]: time="2025-07-01T08:37:48.066949050Z" level=info msg="created NRI interface"
Jul 1 08:37:48.067013 containerd[1542]: time="2025-07-01T08:37:48.067001107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 1 08:37:48.067062 containerd[1542]: time="2025-07-01T08:37:48.067049491Z" level=info msg="Connect containerd service"
Jul 1 08:37:48.067148 containerd[1542]: time="2025-07-01T08:37:48.067133205Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 1 08:37:48.068139 containerd[1542]:
time="2025-07-01T08:37:48.068113833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:37:48.150724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:37:48.161530 tar[1541]: linux-amd64/README.md Jul 1 08:37:48.163138 containerd[1542]: time="2025-07-01T08:37:48.163092140Z" level=info msg="Start subscribing containerd event" Jul 1 08:37:48.163211 containerd[1542]: time="2025-07-01T08:37:48.163156956Z" level=info msg="Start recovering state" Jul 1 08:37:48.163317 containerd[1542]: time="2025-07-01T08:37:48.163295722Z" level=info msg="Start event monitor" Jul 1 08:37:48.163362 containerd[1542]: time="2025-07-01T08:37:48.163342318Z" level=info msg="Start cni network conf syncer for default" Jul 1 08:37:48.163462 containerd[1542]: time="2025-07-01T08:37:48.163357994Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 1 08:37:48.163507 containerd[1542]: time="2025-07-01T08:37:48.163472206Z" level=info msg="Start streaming server" Jul 1 08:37:48.163507 containerd[1542]: time="2025-07-01T08:37:48.163506112Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 1 08:37:48.163591 containerd[1542]: time="2025-07-01T08:37:48.163514676Z" level=info msg="runtime interface starting up..." Jul 1 08:37:48.163591 containerd[1542]: time="2025-07-01T08:37:48.163520816Z" level=info msg="starting plugins..." Jul 1 08:37:48.163591 containerd[1542]: time="2025-07-01T08:37:48.163533240Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 1 08:37:48.163663 containerd[1542]: time="2025-07-01T08:37:48.163536776Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 1 08:37:48.163794 containerd[1542]: time="2025-07-01T08:37:48.163768509Z" level=info msg="containerd successfully booted in 0.119268s" Jul 1 08:37:48.164316 systemd[1]: Started containerd.service - containerd container runtime. Jul 1 08:37:48.185971 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 1 08:37:49.351710 systemd-networkd[1475]: eth0: Gained IPv6LL Jul 1 08:37:49.355136 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 1 08:37:49.357306 systemd[1]: Reached target network-online.target - Network is Online. Jul 1 08:37:49.360456 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 1 08:37:49.363606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:37:49.366322 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 1 08:37:49.400633 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 1 08:37:49.404136 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 1 08:37:49.404435 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 1 08:37:49.406174 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 1 08:37:50.822643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:50.824907 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 1 08:37:50.827558 systemd[1]: Startup finished in 3.396s (kernel) + 7.490s (initrd) + 5.757s (userspace) = 16.644s. 
Jul 1 08:37:50.837861 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:37:51.491192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 1 08:37:51.493531 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:51080.service - OpenSSH per-connection server daemon (10.0.0.1:51080).
Jul 1 08:37:51.498644 kubelet[1659]: E0701 08:37:51.498580 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:37:51.503200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:37:51.503500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:37:51.515756 systemd[1]: kubelet.service: Consumed 1.955s CPU time, 266.9M memory peak.
Jul 1 08:37:51.609043 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 51080 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:51.611024 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:51.618905 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 1 08:37:51.620133 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 1 08:37:51.626861 systemd-logind[1525]: New session 1 of user core.
Jul 1 08:37:51.645487 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 1 08:37:51.648797 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 1 08:37:51.668830 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 1 08:37:51.671725 systemd-logind[1525]: New session c1 of user core.
Jul 1 08:37:51.857689 systemd[1677]: Queued start job for default target default.target.
Jul 1 08:37:51.866868 systemd[1677]: Created slice app.slice - User Application Slice.
Jul 1 08:37:51.866902 systemd[1677]: Reached target paths.target - Paths.
Jul 1 08:37:51.866946 systemd[1677]: Reached target timers.target - Timers.
Jul 1 08:37:51.868687 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 1 08:37:51.886280 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 1 08:37:51.886452 systemd[1677]: Reached target sockets.target - Sockets.
Jul 1 08:37:51.886517 systemd[1677]: Reached target basic.target - Basic System.
Jul 1 08:37:51.886571 systemd[1677]: Reached target default.target - Main User Target.
Jul 1 08:37:51.886609 systemd[1677]: Startup finished in 206ms.
Jul 1 08:37:51.887155 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 1 08:37:51.888812 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 1 08:37:51.962370 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:51090.service - OpenSSH per-connection server daemon (10.0.0.1:51090).
Jul 1 08:37:52.041593 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 51090 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:52.043809 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:52.050472 systemd-logind[1525]: New session 2 of user core.
Jul 1 08:37:52.066002 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 1 08:37:52.122615 sshd[1691]: Connection closed by 10.0.0.1 port 51090
Jul 1 08:37:52.122894 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Jul 1 08:37:52.141442 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:51090.service: Deactivated successfully.
Jul 1 08:37:52.143825 systemd[1]: session-2.scope: Deactivated successfully.
Jul 1 08:37:52.144660 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit.
Jul 1 08:37:52.148240 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:51106.service - OpenSSH per-connection server daemon (10.0.0.1:51106).
Jul 1 08:37:52.149001 systemd-logind[1525]: Removed session 2.
Jul 1 08:37:52.207068 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 51106 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:52.209010 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:52.214034 systemd-logind[1525]: New session 3 of user core.
Jul 1 08:37:52.223571 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 1 08:37:52.273466 sshd[1700]: Connection closed by 10.0.0.1 port 51106
Jul 1 08:37:52.274026 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
Jul 1 08:37:52.284744 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:51106.service: Deactivated successfully.
Jul 1 08:37:52.286685 systemd[1]: session-3.scope: Deactivated successfully.
Jul 1 08:37:52.287586 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit.
Jul 1 08:37:52.289944 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:51114.service - OpenSSH per-connection server daemon (10.0.0.1:51114).
Jul 1 08:37:52.291358 systemd-logind[1525]: Removed session 3.
Jul 1 08:37:52.356325 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 51114 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:52.358198 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:52.363207 systemd-logind[1525]: New session 4 of user core.
Jul 1 08:37:52.373573 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 1 08:37:52.430181 sshd[1709]: Connection closed by 10.0.0.1 port 51114
Jul 1 08:37:52.430630 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Jul 1 08:37:52.442704 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:51114.service: Deactivated successfully.
Jul 1 08:37:52.444695 systemd[1]: session-4.scope: Deactivated successfully.
Jul 1 08:37:52.445457 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit.
Jul 1 08:37:52.448107 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:51122.service - OpenSSH per-connection server daemon (10.0.0.1:51122).
Jul 1 08:37:52.448934 systemd-logind[1525]: Removed session 4.
Jul 1 08:37:52.518002 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 51122 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:52.519636 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:52.524387 systemd-logind[1525]: New session 5 of user core.
Jul 1 08:37:52.534573 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 1 08:37:52.598379 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 1 08:37:52.598780 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:37:52.623397 sudo[1720]: pam_unix(sudo:session): session closed for user root
Jul 1 08:37:52.625786 sshd[1719]: Connection closed by 10.0.0.1 port 51122
Jul 1 08:37:52.626180 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Jul 1 08:37:52.641093 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:51122.service: Deactivated successfully.
Jul 1 08:37:52.643222 systemd[1]: session-5.scope: Deactivated successfully.
Jul 1 08:37:52.644148 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit.
Jul 1 08:37:52.646941 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:51138.service - OpenSSH per-connection server daemon (10.0.0.1:51138).
Jul 1 08:37:52.648491 systemd-logind[1525]: Removed session 5.
Jul 1 08:37:52.701161 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 51138 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:52.702824 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:52.708121 systemd-logind[1525]: New session 6 of user core.
Jul 1 08:37:52.717569 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 1 08:37:52.771900 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 1 08:37:52.772240 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:37:52.780946 sudo[1731]: pam_unix(sudo:session): session closed for user root
Jul 1 08:37:52.789618 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 1 08:37:52.789965 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:37:52.801449 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 1 08:37:52.859827 augenrules[1753]: No rules
Jul 1 08:37:52.861881 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 1 08:37:52.862174 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 1 08:37:52.863669 sudo[1730]: pam_unix(sudo:session): session closed for user root
Jul 1 08:37:52.865377 sshd[1729]: Connection closed by 10.0.0.1 port 51138
Jul 1 08:37:52.865785 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
Jul 1 08:37:52.879106 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:51138.service: Deactivated successfully.
Jul 1 08:37:52.881390 systemd[1]: session-6.scope: Deactivated successfully.
Jul 1 08:37:52.882217 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit.
Jul 1 08:37:52.885491 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:51144.service - OpenSSH per-connection server daemon (10.0.0.1:51144).
Jul 1 08:37:52.886598 systemd-logind[1525]: Removed session 6.
Jul 1 08:37:52.949267 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 51144 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:37:52.951526 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:37:52.957100 systemd-logind[1525]: New session 7 of user core.
Jul 1 08:37:52.966738 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 1 08:37:53.023885 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 1 08:37:53.024333 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 1 08:37:53.821163 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 1 08:37:53.846934 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 1 08:37:54.430742 dockerd[1787]: time="2025-07-01T08:37:54.430549976Z" level=info msg="Starting up"
Jul 1 08:37:54.432661 dockerd[1787]: time="2025-07-01T08:37:54.432610309Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 1 08:37:55.169252 dockerd[1787]: time="2025-07-01T08:37:55.169177208Z" level=info msg="Loading containers: start."
Jul 1 08:37:55.180463 kernel: Initializing XFRM netlink socket
Jul 1 08:37:55.478112 systemd-networkd[1475]: docker0: Link UP
Jul 1 08:37:55.484826 dockerd[1787]: time="2025-07-01T08:37:55.484747970Z" level=info msg="Loading containers: done."
Jul 1 08:37:55.552146 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2035312908-merged.mount: Deactivated successfully.
Jul 1 08:37:55.555904 dockerd[1787]: time="2025-07-01T08:37:55.555851715Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 1 08:37:55.555986 dockerd[1787]: time="2025-07-01T08:37:55.555977087Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 1 08:37:55.556134 dockerd[1787]: time="2025-07-01T08:37:55.556108485Z" level=info msg="Initializing buildkit"
Jul 1 08:37:55.597737 dockerd[1787]: time="2025-07-01T08:37:55.597657769Z" level=info msg="Completed buildkit initialization"
Jul 1 08:37:55.606090 dockerd[1787]: time="2025-07-01T08:37:55.605991788Z" level=info msg="Daemon has completed initialization"
Jul 1 08:37:55.606312 dockerd[1787]: time="2025-07-01T08:37:55.606177874Z" level=info msg="API listen on /run/docker.sock"
Jul 1 08:37:55.606377 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 1 08:37:56.617004 containerd[1542]: time="2025-07-01T08:37:56.616927978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 1 08:37:57.367646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470555580.mount: Deactivated successfully.
Jul 1 08:37:58.734969 containerd[1542]: time="2025-07-01T08:37:58.734890670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:37:58.735622 containerd[1542]: time="2025-07-01T08:37:58.735572197Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 1 08:37:58.737031 containerd[1542]: time="2025-07-01T08:37:58.736998680Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:37:58.739824 containerd[1542]: time="2025-07-01T08:37:58.739792911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:37:58.740856 containerd[1542]: time="2025-07-01T08:37:58.740814921Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.12382252s"
Jul 1 08:37:58.740927 containerd[1542]: time="2025-07-01T08:37:58.740862453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 1 08:37:58.741683 containerd[1542]: time="2025-07-01T08:37:58.741646750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 1 08:38:00.187587 containerd[1542]: time="2025-07-01T08:38:00.186874311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:00.188122 containerd[1542]: time="2025-07-01T08:38:00.187931674Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 1 08:38:00.189322 containerd[1542]: time="2025-07-01T08:38:00.189278014Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:00.192287 containerd[1542]: time="2025-07-01T08:38:00.192248373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:00.193382 containerd[1542]: time="2025-07-01T08:38:00.193346940Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.451668317s"
Jul 1 08:38:00.193382 containerd[1542]: time="2025-07-01T08:38:00.193380949Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 1 08:38:00.194341 containerd[1542]: time="2025-07-01T08:38:00.194313922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 1 08:38:01.550187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 1 08:38:01.552488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:38:02.621592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:38:02.626990 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:38:02.682097 kubelet[2068]: E0701 08:38:02.682023 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:38:02.689922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:38:02.690150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:38:02.690552 systemd[1]: kubelet.service: Consumed 286ms CPU time, 111.4M memory peak.
Jul 1 08:38:03.332152 containerd[1542]: time="2025-07-01T08:38:03.332079168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:03.333155 containerd[1542]: time="2025-07-01T08:38:03.333109278Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 1 08:38:03.334683 containerd[1542]: time="2025-07-01T08:38:03.334643187Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:03.338069 containerd[1542]: time="2025-07-01T08:38:03.338010993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:03.339161 containerd[1542]: time="2025-07-01T08:38:03.339120013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 3.144772848s"
Jul 1 08:38:03.339161 containerd[1542]: time="2025-07-01T08:38:03.339149760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 1 08:38:03.339677 containerd[1542]: time="2025-07-01T08:38:03.339641549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 1 08:38:04.440762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799471688.mount: Deactivated successfully.
Jul 1 08:38:05.306306 containerd[1542]: time="2025-07-01T08:38:05.306153195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:05.307225 containerd[1542]: time="2025-07-01T08:38:05.307187397Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 1 08:38:05.308935 containerd[1542]: time="2025-07-01T08:38:05.308866289Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:05.311560 containerd[1542]: time="2025-07-01T08:38:05.311495026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:05.312275 containerd[1542]: time="2025-07-01T08:38:05.312222414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.972551322s"
Jul 1 08:38:05.312275 containerd[1542]: time="2025-07-01T08:38:05.312261118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 1 08:38:05.312784 containerd[1542]: time="2025-07-01T08:38:05.312755882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 1 08:38:06.270554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043351893.mount: Deactivated successfully.
Jul 1 08:38:07.879659 containerd[1542]: time="2025-07-01T08:38:07.879581326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:07.933337 containerd[1542]: time="2025-07-01T08:38:07.933249405Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 1 08:38:08.023609 containerd[1542]: time="2025-07-01T08:38:08.023543393Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:08.081905 containerd[1542]: time="2025-07-01T08:38:08.081847717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:08.082908 containerd[1542]: time="2025-07-01T08:38:08.082860540Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.770074606s"
Jul 1 08:38:08.082908 containerd[1542]: time="2025-07-01T08:38:08.082892647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 1 08:38:08.083389 containerd[1542]: time="2025-07-01T08:38:08.083340529Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 1 08:38:08.912654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431022847.mount: Deactivated successfully.
Jul 1 08:38:08.919245 containerd[1542]: time="2025-07-01T08:38:08.919184578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 1 08:38:08.920094 containerd[1542]: time="2025-07-01T08:38:08.920053598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 1 08:38:08.921385 containerd[1542]: time="2025-07-01T08:38:08.921353449Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 1 08:38:08.923546 containerd[1542]: time="2025-07-01T08:38:08.923518899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 1 08:38:08.924325 containerd[1542]: time="2025-07-01T08:38:08.924294180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 840.930199ms"
Jul 1 08:38:08.924325 containerd[1542]: time="2025-07-01T08:38:08.924321294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 1 08:38:08.924871 containerd[1542]: time="2025-07-01T08:38:08.924825745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 1 08:38:09.472883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289712541.mount: Deactivated successfully.
Jul 1 08:38:12.148700 containerd[1542]: time="2025-07-01T08:38:12.148625226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:12.149811 containerd[1542]: time="2025-07-01T08:38:12.149736606Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 1 08:38:12.152262 containerd[1542]: time="2025-07-01T08:38:12.152205441Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:12.156655 containerd[1542]: time="2025-07-01T08:38:12.156585699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:12.158029 containerd[1542]: time="2025-07-01T08:38:12.157963676Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.233085068s"
Jul 1 08:38:12.158110 containerd[1542]: time="2025-07-01T08:38:12.158030328Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 1 08:38:12.781400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 1 08:38:12.783715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:38:13.023741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:38:13.042775 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 08:38:13.078784 kubelet[2228]: E0701 08:38:13.078708 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 08:38:13.083681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 08:38:13.083924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 08:38:13.084326 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.6M memory peak.
Jul 1 08:38:15.207788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:38:15.207960 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.6M memory peak.
Jul 1 08:38:15.210365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:38:15.239609 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-7.scope)...
Jul 1 08:38:15.239633 systemd[1]: Reloading...
Jul 1 08:38:15.343482 zram_generator::config[2289]: No configuration found.
Jul 1 08:38:16.417742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:38:16.624048 systemd[1]: Reloading finished in 1383 ms. Jul 1 08:38:16.724338 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 08:38:16.724572 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 08:38:16.725734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:38:16.725805 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.2M memory peak. Jul 1 08:38:16.728952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:38:17.042527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:38:17.060956 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:38:17.110683 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:38:17.110683 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:38:17.110683 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 1 08:38:17.111157 kubelet[2335]: I0701 08:38:17.110748 2335 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:38:19.614059 kubelet[2335]: I0701 08:38:19.613985 2335 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 1 08:38:19.614059 kubelet[2335]: I0701 08:38:19.614032 2335 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:38:19.614696 kubelet[2335]: I0701 08:38:19.614349 2335 server.go:956] "Client rotation is on, will bootstrap in background" Jul 1 08:38:19.771715 kubelet[2335]: E0701 08:38:19.771640 2335 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 1 08:38:19.772965 kubelet[2335]: I0701 08:38:19.771790 2335 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:38:19.779618 kubelet[2335]: I0701 08:38:19.779571 2335 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:38:19.785686 kubelet[2335]: I0701 08:38:19.785654 2335 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:38:19.785966 kubelet[2335]: I0701 08:38:19.785920 2335 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:38:19.786152 kubelet[2335]: I0701 08:38:19.785953 2335 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:38:19.786268 kubelet[2335]: I0701 08:38:19.786157 2335 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:38:19.786268 
kubelet[2335]: I0701 08:38:19.786168 2335 container_manager_linux.go:303] "Creating device plugin manager" Jul 1 08:38:19.786377 kubelet[2335]: I0701 08:38:19.786352 2335 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:38:19.788774 kubelet[2335]: I0701 08:38:19.788735 2335 kubelet.go:480] "Attempting to sync node with API server" Jul 1 08:38:19.788774 kubelet[2335]: I0701 08:38:19.788764 2335 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:38:19.788850 kubelet[2335]: I0701 08:38:19.788793 2335 kubelet.go:386] "Adding apiserver pod source" Jul 1 08:38:19.791041 kubelet[2335]: I0701 08:38:19.790812 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:38:19.793988 kubelet[2335]: E0701 08:38:19.793952 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 1 08:38:19.793988 kubelet[2335]: E0701 08:38:19.793951 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 1 08:38:19.797284 kubelet[2335]: I0701 08:38:19.797241 2335 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:38:19.797806 kubelet[2335]: I0701 08:38:19.797785 2335 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 1 08:38:19.798522 kubelet[2335]: W0701 08:38:19.798495 2335 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 1 08:38:19.802679 kubelet[2335]: I0701 08:38:19.802653 2335 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:38:19.802747 kubelet[2335]: I0701 08:38:19.802724 2335 server.go:1289] "Started kubelet" Jul 1 08:38:19.806148 kubelet[2335]: I0701 08:38:19.806092 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:38:19.807021 kubelet[2335]: I0701 08:38:19.806612 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:38:19.807021 kubelet[2335]: I0701 08:38:19.806933 2335 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:38:19.807021 kubelet[2335]: I0701 08:38:19.807002 2335 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:38:19.807288 kubelet[2335]: E0701 08:38:19.805992 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184e13d279e2ff5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-01 08:38:19.802681179 +0000 UTC m=+2.734814706,LastTimestamp:2025-07-01 08:38:19.802681179 +0000 UTC m=+2.734814706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 1 08:38:19.807666 kubelet[2335]: I0701 08:38:19.807636 2335 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:38:19.807988 kubelet[2335]: E0701 08:38:19.807965 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:38:19.808089 kubelet[2335]: I0701 08:38:19.808076 2335 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:38:19.808230 kubelet[2335]: I0701 08:38:19.808209 2335 server.go:317] "Adding debug handlers to kubelet server" Jul 1 08:38:19.808486 kubelet[2335]: I0701 08:38:19.808466 2335 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:38:19.808635 kubelet[2335]: I0701 08:38:19.808619 2335 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:38:19.809271 kubelet[2335]: E0701 08:38:19.809120 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 1 08:38:19.809469 kubelet[2335]: I0701 08:38:19.809436 2335 factory.go:223] Registration of the systemd container factory successfully Jul 1 08:38:19.809525 kubelet[2335]: I0701 08:38:19.809509 2335 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:38:19.810608 kubelet[2335]: E0701 08:38:19.810432 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms" Jul 1 08:38:19.810718 kubelet[2335]: I0701 08:38:19.810673 2335 factory.go:223] Registration of the containerd container factory 
successfully Jul 1 08:38:19.810786 kubelet[2335]: E0701 08:38:19.810721 2335 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:38:19.820047 kubelet[2335]: I0701 08:38:19.820019 2335 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:38:19.820175 kubelet[2335]: I0701 08:38:19.820154 2335 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:38:19.820249 kubelet[2335]: I0701 08:38:19.820181 2335 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:38:19.857787 kubelet[2335]: I0701 08:38:19.857716 2335 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 1 08:38:19.859194 kubelet[2335]: I0701 08:38:19.859162 2335 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 1 08:38:19.859277 kubelet[2335]: I0701 08:38:19.859267 2335 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 1 08:38:19.859395 kubelet[2335]: I0701 08:38:19.859363 2335 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 1 08:38:19.859660 kubelet[2335]: I0701 08:38:19.859444 2335 kubelet.go:2436] "Starting kubelet main sync loop" Jul 1 08:38:19.859660 kubelet[2335]: E0701 08:38:19.859486 2335 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:38:19.868697 kubelet[2335]: E0701 08:38:19.868540 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 1 08:38:19.880895 kubelet[2335]: I0701 08:38:19.880837 2335 policy_none.go:49] "None policy: Start" Jul 1 08:38:19.880895 kubelet[2335]: I0701 08:38:19.880893 2335 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:38:19.881063 kubelet[2335]: I0701 08:38:19.880913 2335 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:38:19.909033 kubelet[2335]: E0701 08:38:19.908972 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:38:19.922047 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 1 08:38:19.945907 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 1 08:38:19.949955 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 1 08:38:19.960068 kubelet[2335]: E0701 08:38:19.959984 2335 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 1 08:38:19.963755 kubelet[2335]: E0701 08:38:19.963645 2335 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 1 08:38:19.963997 kubelet[2335]: I0701 08:38:19.963981 2335 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:38:19.964064 kubelet[2335]: I0701 08:38:19.964014 2335 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:38:19.964365 kubelet[2335]: I0701 08:38:19.964349 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:38:19.965762 kubelet[2335]: E0701 08:38:19.965734 2335 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 1 08:38:19.965830 kubelet[2335]: E0701 08:38:19.965820 2335 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 1 08:38:20.011999 kubelet[2335]: E0701 08:38:20.011919 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms" Jul 1 08:38:20.071208 kubelet[2335]: I0701 08:38:20.071098 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:38:20.071826 kubelet[2335]: E0701 08:38:20.071786 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Jul 1 08:38:20.273820 kubelet[2335]: I0701 08:38:20.273667 2335 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:38:20.274194 kubelet[2335]: E0701 08:38:20.274157 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Jul 1 08:38:20.310627 kubelet[2335]: I0701 08:38:20.310544 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1350a82a820a22f4372d7bfb67892602-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1350a82a820a22f4372d7bfb67892602\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:20.310627 kubelet[2335]: I0701 08:38:20.310612 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1350a82a820a22f4372d7bfb67892602-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1350a82a820a22f4372d7bfb67892602\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:20.310627 kubelet[2335]: I0701 08:38:20.310641 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1350a82a820a22f4372d7bfb67892602-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1350a82a820a22f4372d7bfb67892602\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:20.412764 kubelet[2335]: E0701 08:38:20.412690 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms" Jul 1 08:38:20.430785 systemd[1]: Created slice kubepods-burstable-pod1350a82a820a22f4372d7bfb67892602.slice - libcontainer container 
kubepods-burstable-pod1350a82a820a22f4372d7bfb67892602.slice. Jul 1 08:38:20.446673 kubelet[2335]: E0701 08:38:20.446621 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:20.453684 containerd[1542]: time="2025-07-01T08:38:20.453638576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1350a82a820a22f4372d7bfb67892602,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:20.511136 kubelet[2335]: I0701 08:38:20.511066 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:20.511630 kubelet[2335]: I0701 08:38:20.511471 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:20.511630 kubelet[2335]: I0701 08:38:20.511510 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:20.511630 kubelet[2335]: I0701 08:38:20.511533 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:20.511797 kubelet[2335]: I0701 08:38:20.511633 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:20.604947 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 1 08:38:20.606917 kubelet[2335]: E0701 08:38:20.606858 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:20.612248 kubelet[2335]: I0701 08:38:20.612200 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:20.675721 kubelet[2335]: I0701 08:38:20.675635 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:38:20.676201 kubelet[2335]: E0701 08:38:20.676131 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Jul 1 08:38:20.696477 kubelet[2335]: E0701 08:38:20.696406 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 1 08:38:20.698929 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 1 08:38:20.701241 kubelet[2335]: E0701 08:38:20.701194 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:20.806482 containerd[1542]: time="2025-07-01T08:38:20.806324109Z" level=info msg="connecting to shim 73f92a47bb9c8711853ed6c58b1f63b06d3aae970ba15fa1d31b165818bbca30" address="unix:///run/containerd/s/eb9062d89c0417c22503871e4c9ec7fab265feab44700c3a66544235561ee012" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:20.856723 systemd[1]: Started cri-containerd-73f92a47bb9c8711853ed6c58b1f63b06d3aae970ba15fa1d31b165818bbca30.scope - libcontainer container 73f92a47bb9c8711853ed6c58b1f63b06d3aae970ba15fa1d31b165818bbca30. 
Jul 1 08:38:20.885577 kubelet[2335]: E0701 08:38:20.885529 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 1 08:38:20.910996 containerd[1542]: time="2025-07-01T08:38:20.910940147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:20.938275 containerd[1542]: time="2025-07-01T08:38:20.938196355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1350a82a820a22f4372d7bfb67892602,Namespace:kube-system,Attempt:0,} returns sandbox id \"73f92a47bb9c8711853ed6c58b1f63b06d3aae970ba15fa1d31b165818bbca30\"" Jul 1 08:38:20.946441 containerd[1542]: time="2025-07-01T08:38:20.945717256Z" level=info msg="CreateContainer within sandbox \"73f92a47bb9c8711853ed6c58b1f63b06d3aae970ba15fa1d31b165818bbca30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 1 08:38:20.951333 containerd[1542]: time="2025-07-01T08:38:20.951272174Z" level=info msg="connecting to shim d4367f16625d4d6aebd5595b59b5ff8af186c5de5adde43f06c9fae926e42235" address="unix:///run/containerd/s/fc518b16c23c33b5cea74cecb64331dc2b456e9e0d28af3957d76194be1735e7" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:20.961890 containerd[1542]: time="2025-07-01T08:38:20.961834030Z" level=info msg="Container 8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:20.991585 systemd[1]: Started cri-containerd-d4367f16625d4d6aebd5595b59b5ff8af186c5de5adde43f06c9fae926e42235.scope - libcontainer container d4367f16625d4d6aebd5595b59b5ff8af186c5de5adde43f06c9fae926e42235. 
Jul 1 08:38:21.002923 containerd[1542]: time="2025-07-01T08:38:21.002848801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:21.014965 kubelet[2335]: E0701 08:38:21.014920 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 1 08:38:21.214125 kubelet[2335]: E0701 08:38:21.213972 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="1.6s" Jul 1 08:38:21.265069 kubelet[2335]: E0701 08:38:21.264996 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 1 08:38:21.275982 containerd[1542]: time="2025-07-01T08:38:21.275905593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4367f16625d4d6aebd5595b59b5ff8af186c5de5adde43f06c9fae926e42235\"" Jul 1 08:38:21.276545 containerd[1542]: time="2025-07-01T08:38:21.276496264Z" level=info msg="CreateContainer within sandbox \"73f92a47bb9c8711853ed6c58b1f63b06d3aae970ba15fa1d31b165818bbca30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5\"" Jul 1 08:38:21.277101 containerd[1542]: time="2025-07-01T08:38:21.277059122Z" level=info msg="StartContainer for \"8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5\"" Jul 1 08:38:21.278830 containerd[1542]: time="2025-07-01T08:38:21.278793774Z" level=info msg="connecting to shim 8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5" address="unix:///run/containerd/s/eb9062d89c0417c22503871e4c9ec7fab265feab44700c3a66544235561ee012" protocol=ttrpc version=3 Jul 1 08:38:21.283465 containerd[1542]: time="2025-07-01T08:38:21.283404914Z" level=info msg="CreateContainer within sandbox \"d4367f16625d4d6aebd5595b59b5ff8af186c5de5adde43f06c9fae926e42235\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 1 08:38:21.299435 containerd[1542]: time="2025-07-01T08:38:21.299134662Z" level=info msg="Container cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:21.303703 systemd[1]: Started cri-containerd-8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5.scope - libcontainer container 8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5. 
Jul 1 08:38:21.313162 containerd[1542]: time="2025-07-01T08:38:21.313114000Z" level=info msg="connecting to shim 95fdf104ed93398123b7e89d118366679dfc89fee86881da3fece234a4397e9f" address="unix:///run/containerd/s/18d75b9d2c8890e1e2225b449f89cdbbd36464c497517e08b19ca6432cacee67" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:21.314368 containerd[1542]: time="2025-07-01T08:38:21.314338924Z" level=info msg="CreateContainer within sandbox \"d4367f16625d4d6aebd5595b59b5ff8af186c5de5adde43f06c9fae926e42235\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93\"" Jul 1 08:38:21.315269 containerd[1542]: time="2025-07-01T08:38:21.315233051Z" level=info msg="StartContainer for \"cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93\"" Jul 1 08:38:21.316628 containerd[1542]: time="2025-07-01T08:38:21.316585387Z" level=info msg="connecting to shim cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93" address="unix:///run/containerd/s/fc518b16c23c33b5cea74cecb64331dc2b456e9e0d28af3957d76194be1735e7" protocol=ttrpc version=3 Jul 1 08:38:21.402619 systemd[1]: Started cri-containerd-cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93.scope - libcontainer container cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93. Jul 1 08:38:21.427645 systemd[1]: Started cri-containerd-95fdf104ed93398123b7e89d118366679dfc89fee86881da3fece234a4397e9f.scope - libcontainer container 95fdf104ed93398123b7e89d118366679dfc89fee86881da3fece234a4397e9f. 
Jul 1 08:38:21.479840 kubelet[2335]: I0701 08:38:21.478980 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:38:21.480735 kubelet[2335]: E0701 08:38:21.480617 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Jul 1 08:38:21.740825 containerd[1542]: time="2025-07-01T08:38:21.740716617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"95fdf104ed93398123b7e89d118366679dfc89fee86881da3fece234a4397e9f\"" Jul 1 08:38:21.743263 containerd[1542]: time="2025-07-01T08:38:21.742866918Z" level=info msg="StartContainer for \"cb260d316e461dd326edbc5b125cf4c4e8a2d751381920c357c1ba2fdb86bb93\" returns successfully" Jul 1 08:38:21.744271 containerd[1542]: time="2025-07-01T08:38:21.744246535Z" level=info msg="StartContainer for \"8fbbcaf67d6e902e79436488c3aa505efeb55b3985d011e4711e0b484726eea5\" returns successfully" Jul 1 08:38:21.756981 containerd[1542]: time="2025-07-01T08:38:21.756905898Z" level=info msg="CreateContainer within sandbox \"95fdf104ed93398123b7e89d118366679dfc89fee86881da3fece234a4397e9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 1 08:38:21.785258 containerd[1542]: time="2025-07-01T08:38:21.785182676Z" level=info msg="Container 68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:21.791214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478813079.mount: Deactivated successfully. Jul 1 08:38:21.795394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780667712.mount: Deactivated successfully. 
Jul 1 08:38:21.872091 kubelet[2335]: E0701 08:38:21.871905 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:21.875553 kubelet[2335]: E0701 08:38:21.875424 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:21.906607 containerd[1542]: time="2025-07-01T08:38:21.906554330Z" level=info msg="CreateContainer within sandbox \"95fdf104ed93398123b7e89d118366679dfc89fee86881da3fece234a4397e9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f\"" Jul 1 08:38:21.907999 containerd[1542]: time="2025-07-01T08:38:21.907764416Z" level=info msg="StartContainer for \"68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f\"" Jul 1 08:38:21.909682 containerd[1542]: time="2025-07-01T08:38:21.909655204Z" level=info msg="connecting to shim 68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f" address="unix:///run/containerd/s/18d75b9d2c8890e1e2225b449f89cdbbd36464c497517e08b19ca6432cacee67" protocol=ttrpc version=3 Jul 1 08:38:21.984591 systemd[1]: Started cri-containerd-68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f.scope - libcontainer container 68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f. 
Jul 1 08:38:22.142697 containerd[1542]: time="2025-07-01T08:38:22.142583181Z" level=info msg="StartContainer for \"68b450199ab4839ff6dfba349c448100643ef79b29664cd17d7d70aa7b7e1b4f\" returns successfully" Jul 1 08:38:22.879139 kubelet[2335]: E0701 08:38:22.879092 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:22.879998 kubelet[2335]: E0701 08:38:22.879890 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:38:23.082795 kubelet[2335]: I0701 08:38:23.082755 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:38:23.669814 kubelet[2335]: E0701 08:38:23.669748 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 1 08:38:23.757363 kubelet[2335]: I0701 08:38:23.757259 2335 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 1 08:38:23.757363 kubelet[2335]: E0701 08:38:23.757349 2335 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 1 08:38:23.767634 kubelet[2335]: I0701 08:38:23.767580 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:23.795699 kubelet[2335]: I0701 08:38:23.795648 2335 apiserver.go:52] "Watching apiserver" Jul 1 08:38:23.797698 kubelet[2335]: E0701 08:38:23.797591 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184e13d279e2ff5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-01 08:38:19.802681179 +0000 UTC m=+2.734814706,LastTimestamp:2025-07-01 08:38:19.802681179 +0000 UTC m=+2.734814706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 1 08:38:23.809341 kubelet[2335]: I0701 08:38:23.809269 2335 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:38:23.810392 kubelet[2335]: I0701 08:38:23.810362 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:23.825320 kubelet[2335]: E0701 08:38:23.825260 2335 kubelet.go:3311] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:23.866384 kubelet[2335]: E0701 08:38:23.866302 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:23.866384 kubelet[2335]: I0701 08:38:23.866365 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:23.868430 kubelet[2335]: E0701 08:38:23.868342 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:23.868430 kubelet[2335]: I0701 08:38:23.868378 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:23.871130 kubelet[2335]: E0701 08:38:23.871081 2335 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:23.880694 kubelet[2335]: I0701 08:38:23.880654 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:23.883904 kubelet[2335]: E0701 08:38:23.883245 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:24.298865 kubelet[2335]: I0701 08:38:24.298766 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:24.302232 kubelet[2335]: E0701 08:38:24.302190 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:24.882382 kubelet[2335]: I0701 08:38:24.882336 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:28.021833 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-7.scope)... Jul 1 08:38:28.021859 systemd[1]: Reloading... Jul 1 08:38:28.141511 zram_generator::config[2671]: No configuration found. Jul 1 08:38:28.266327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:38:28.421879 systemd[1]: Reloading finished in 399 ms. Jul 1 08:38:28.450408 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:38:28.470018 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 1 08:38:28.470395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:38:28.470468 systemd[1]: kubelet.service: Consumed 1.049s CPU time, 134.3M memory peak. Jul 1 08:38:28.472516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:38:28.705643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:38:28.718820 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:38:28.765167 kubelet[2710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:38:28.765167 kubelet[2710]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:38:28.765167 kubelet[2710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 1 08:38:28.765629 kubelet[2710]: I0701 08:38:28.765280 2710 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:38:28.774652 kubelet[2710]: I0701 08:38:28.774607 2710 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 1 08:38:28.774652 kubelet[2710]: I0701 08:38:28.774633 2710 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:38:28.774847 kubelet[2710]: I0701 08:38:28.774823 2710 server.go:956] "Client rotation is on, will bootstrap in background" Jul 1 08:38:28.775972 kubelet[2710]: I0701 08:38:28.775944 2710 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 1 08:38:28.796277 kubelet[2710]: I0701 08:38:28.796175 2710 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:38:28.802284 kubelet[2710]: I0701 08:38:28.802230 2710 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:38:28.811656 kubelet[2710]: I0701 08:38:28.811589 2710 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:38:28.811932 kubelet[2710]: I0701 08:38:28.811904 2710 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:38:28.812138 kubelet[2710]: I0701 08:38:28.811932 2710 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:38:28.812138 kubelet[2710]: I0701 08:38:28.812129 2710 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:38:28.812138 
kubelet[2710]: I0701 08:38:28.812139 2710 container_manager_linux.go:303] "Creating device plugin manager" Jul 1 08:38:28.812337 kubelet[2710]: I0701 08:38:28.812194 2710 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:38:28.812480 kubelet[2710]: I0701 08:38:28.812429 2710 kubelet.go:480] "Attempting to sync node with API server" Jul 1 08:38:28.812480 kubelet[2710]: I0701 08:38:28.812448 2710 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:38:28.812562 kubelet[2710]: I0701 08:38:28.812514 2710 kubelet.go:386] "Adding apiserver pod source" Jul 1 08:38:28.812562 kubelet[2710]: I0701 08:38:28.812527 2710 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:38:28.814660 kubelet[2710]: I0701 08:38:28.814612 2710 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:38:28.815165 kubelet[2710]: I0701 08:38:28.815117 2710 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 1 08:38:28.819787 kubelet[2710]: I0701 08:38:28.819746 2710 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:38:28.821069 kubelet[2710]: I0701 08:38:28.821050 2710 server.go:1289] "Started kubelet" Jul 1 08:38:28.823433 kubelet[2710]: I0701 08:38:28.822539 2710 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:38:28.823433 kubelet[2710]: I0701 08:38:28.822615 2710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:38:28.823433 kubelet[2710]: I0701 08:38:28.823262 2710 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:38:28.824487 kubelet[2710]: I0701 08:38:28.823901 2710 server.go:317] "Adding debug handlers to kubelet server" Jul 1 08:38:28.828088 kubelet[2710]: I0701 
08:38:28.827963 2710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:38:28.831211 kubelet[2710]: I0701 08:38:28.831167 2710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:38:28.833255 kubelet[2710]: I0701 08:38:28.833170 2710 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:38:28.836223 kubelet[2710]: I0701 08:38:28.836194 2710 factory.go:223] Registration of the systemd container factory successfully Jul 1 08:38:28.838058 kubelet[2710]: I0701 08:38:28.837988 2710 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:38:28.838490 kubelet[2710]: I0701 08:38:28.836399 2710 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:38:28.838490 kubelet[2710]: I0701 08:38:28.836572 2710 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:38:28.840750 kubelet[2710]: I0701 08:38:28.840716 2710 factory.go:223] Registration of the containerd container factory successfully Jul 1 08:38:28.845489 kubelet[2710]: E0701 08:38:28.842773 2710 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:38:28.869848 kubelet[2710]: I0701 08:38:28.869764 2710 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 1 08:38:28.873769 kubelet[2710]: I0701 08:38:28.873726 2710 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 1 08:38:28.873769 kubelet[2710]: I0701 08:38:28.873767 2710 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 1 08:38:28.873917 kubelet[2710]: I0701 08:38:28.873798 2710 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 1 08:38:28.873917 kubelet[2710]: I0701 08:38:28.873810 2710 kubelet.go:2436] "Starting kubelet main sync loop" Jul 1 08:38:28.873917 kubelet[2710]: E0701 08:38:28.873865 2710 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:38:28.896537 kubelet[2710]: I0701 08:38:28.896500 2710 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:38:28.897197 kubelet[2710]: I0701 08:38:28.896682 2710 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:38:28.897197 kubelet[2710]: I0701 08:38:28.896709 2710 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:38:28.897538 kubelet[2710]: I0701 08:38:28.897499 2710 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 1 08:38:28.897708 kubelet[2710]: I0701 08:38:28.897546 2710 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 1 08:38:28.897708 kubelet[2710]: I0701 08:38:28.897568 2710 policy_none.go:49] "None policy: Start" Jul 1 08:38:28.897708 kubelet[2710]: I0701 08:38:28.897578 2710 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:38:28.897708 kubelet[2710]: I0701 08:38:28.897591 2710 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:38:28.898899 kubelet[2710]: I0701 08:38:28.898858 2710 state_mem.go:75] "Updated machine memory state" Jul 1 08:38:28.906492 kubelet[2710]: E0701 08:38:28.905785 2710 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 1 08:38:28.906492 kubelet[2710]: I0701 08:38:28.906068 
2710 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:38:28.906492 kubelet[2710]: I0701 08:38:28.906081 2710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:38:28.906492 kubelet[2710]: I0701 08:38:28.906377 2710 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:38:28.908287 kubelet[2710]: E0701 08:38:28.908228 2710 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 1 08:38:28.976116 kubelet[2710]: I0701 08:38:28.975961 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:28.976116 kubelet[2710]: I0701 08:38:28.976013 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:28.976394 kubelet[2710]: I0701 08:38:28.976019 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:28.986841 kubelet[2710]: E0701 08:38:28.986790 2710 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:29.013358 kubelet[2710]: I0701 08:38:29.013316 2710 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:38:29.024881 kubelet[2710]: I0701 08:38:29.024838 2710 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 1 08:38:29.025103 kubelet[2710]: I0701 08:38:29.024951 2710 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 1 08:38:29.139640 kubelet[2710]: I0701 08:38:29.139559 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:29.139640 kubelet[2710]: I0701 08:38:29.139610 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1350a82a820a22f4372d7bfb67892602-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1350a82a820a22f4372d7bfb67892602\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:29.139640 kubelet[2710]: I0701 08:38:29.139636 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1350a82a820a22f4372d7bfb67892602-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1350a82a820a22f4372d7bfb67892602\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:29.139888 kubelet[2710]: I0701 08:38:29.139662 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:29.139888 kubelet[2710]: I0701 08:38:29.139684 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:29.139888 kubelet[2710]: I0701 08:38:29.139704 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1350a82a820a22f4372d7bfb67892602-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"1350a82a820a22f4372d7bfb67892602\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:29.139888 kubelet[2710]: I0701 08:38:29.139725 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:29.139888 kubelet[2710]: I0701 08:38:29.139745 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:29.140033 kubelet[2710]: I0701 08:38:29.139768 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:38:29.813767 kubelet[2710]: I0701 08:38:29.813702 2710 apiserver.go:52] "Watching apiserver" Jul 1 08:38:29.838900 kubelet[2710]: I0701 08:38:29.838844 2710 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:38:29.893439 kubelet[2710]: I0701 08:38:29.893348 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:29.895275 kubelet[2710]: I0701 08:38:29.894318 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:29.908486 kubelet[2710]: E0701 08:38:29.908407 2710 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 1 08:38:29.909380 kubelet[2710]: E0701 08:38:29.909348 2710 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 1 08:38:29.916887 kubelet[2710]: I0701 08:38:29.916810 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.916789972 podStartE2EDuration="5.916789972s" podCreationTimestamp="2025-07-01 08:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:38:29.916387623 +0000 UTC m=+1.192456928" watchObservedRunningTime="2025-07-01 08:38:29.916789972 +0000 UTC m=+1.192859277" Jul 1 08:38:29.926840 kubelet[2710]: I0701 08:38:29.926760 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9267382400000002 podStartE2EDuration="1.92673824s" podCreationTimestamp="2025-07-01 08:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:38:29.926154697 +0000 UTC m=+1.202224003" watchObservedRunningTime="2025-07-01 08:38:29.92673824 +0000 UTC m=+1.202807535" Jul 1 08:38:29.947658 kubelet[2710]: I0701 08:38:29.947546 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.947526421 podStartE2EDuration="1.947526421s" podCreationTimestamp="2025-07-01 08:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:38:29.934790686 +0000 UTC m=+1.210859991" watchObservedRunningTime="2025-07-01 08:38:29.947526421 +0000 
UTC m=+1.223595726"
Jul 1 08:38:32.529113 kubelet[2710]: I0701 08:38:32.529064 2710 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 1 08:38:32.529703 containerd[1542]: time="2025-07-01T08:38:32.529482332Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 1 08:38:32.529989 kubelet[2710]: I0701 08:38:32.529703 2710 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 1 08:38:33.187915 update_engine[1530]: I20250701 08:38:33.187795 1530 update_attempter.cc:509] Updating boot flags...
Jul 1 08:38:35.000543 systemd[1]: Created slice kubepods-besteffort-pod8ec034d7_8ae4_4ba7_883c_b3b8b67606ca.slice - libcontainer container kubepods-besteffort-pod8ec034d7_8ae4_4ba7_883c_b3b8b67606ca.slice.
Jul 1 08:38:35.179447 kubelet[2710]: I0701 08:38:35.179133 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ec034d7-8ae4-4ba7-883c-b3b8b67606ca-xtables-lock\") pod \"kube-proxy-qvmrl\" (UID: \"8ec034d7-8ae4-4ba7-883c-b3b8b67606ca\") " pod="kube-system/kube-proxy-qvmrl"
Jul 1 08:38:35.179447 kubelet[2710]: I0701 08:38:35.179231 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ec034d7-8ae4-4ba7-883c-b3b8b67606ca-lib-modules\") pod \"kube-proxy-qvmrl\" (UID: \"8ec034d7-8ae4-4ba7-883c-b3b8b67606ca\") " pod="kube-system/kube-proxy-qvmrl"
Jul 1 08:38:35.179447 kubelet[2710]: I0701 08:38:35.179345 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ec034d7-8ae4-4ba7-883c-b3b8b67606ca-kube-proxy\") pod \"kube-proxy-qvmrl\" (UID: \"8ec034d7-8ae4-4ba7-883c-b3b8b67606ca\") " pod="kube-system/kube-proxy-qvmrl"
Jul 1 08:38:35.179447 kubelet[2710]: I0701 08:38:35.179400 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9dx\" (UniqueName: \"kubernetes.io/projected/8ec034d7-8ae4-4ba7-883c-b3b8b67606ca-kube-api-access-px9dx\") pod \"kube-proxy-qvmrl\" (UID: \"8ec034d7-8ae4-4ba7-883c-b3b8b67606ca\") " pod="kube-system/kube-proxy-qvmrl"
Jul 1 08:38:36.096492 systemd[1]: Created slice kubepods-besteffort-podc3fc57aa_d48a_4b7b_b0fc_ba8ad180771f.slice - libcontainer container kubepods-besteffort-podc3fc57aa_d48a_4b7b_b0fc_ba8ad180771f.slice.
Jul 1 08:38:36.185716 kubelet[2710]: I0701 08:38:36.185646 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrrj\" (UniqueName: \"kubernetes.io/projected/c3fc57aa-d48a-4b7b-b0fc-ba8ad180771f-kube-api-access-mtrrj\") pod \"tigera-operator-747864d56d-hkgvt\" (UID: \"c3fc57aa-d48a-4b7b-b0fc-ba8ad180771f\") " pod="tigera-operator/tigera-operator-747864d56d-hkgvt"
Jul 1 08:38:36.185716 kubelet[2710]: I0701 08:38:36.185706 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3fc57aa-d48a-4b7b-b0fc-ba8ad180771f-var-lib-calico\") pod \"tigera-operator-747864d56d-hkgvt\" (UID: \"c3fc57aa-d48a-4b7b-b0fc-ba8ad180771f\") " pod="tigera-operator/tigera-operator-747864d56d-hkgvt"
Jul 1 08:38:36.227810 containerd[1542]: time="2025-07-01T08:38:36.227759283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvmrl,Uid:8ec034d7-8ae4-4ba7-883c-b3b8b67606ca,Namespace:kube-system,Attempt:0,}"
Jul 1 08:38:36.699804 containerd[1542]: time="2025-07-01T08:38:36.699747921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hkgvt,Uid:c3fc57aa-d48a-4b7b-b0fc-ba8ad180771f,Namespace:tigera-operator,Attempt:0,}"
Jul 1 08:38:36.752380 containerd[1542]: time="2025-07-01T08:38:36.752303691Z" level=info msg="connecting to shim 17794c7b1a9617b4ad0d7d8800bb5cd587ce59491faf7b4dce66a311e10e5e53" address="unix:///run/containerd/s/53c274d20c4e555dc17496d18b15c05e0eb76fde114c1329807b0fc78a98ae6b" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:38:36.783634 systemd[1]: Started cri-containerd-17794c7b1a9617b4ad0d7d8800bb5cd587ce59491faf7b4dce66a311e10e5e53.scope - libcontainer container 17794c7b1a9617b4ad0d7d8800bb5cd587ce59491faf7b4dce66a311e10e5e53.
Jul 1 08:38:37.060993 containerd[1542]: time="2025-07-01T08:38:37.060938904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvmrl,Uid:8ec034d7-8ae4-4ba7-883c-b3b8b67606ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"17794c7b1a9617b4ad0d7d8800bb5cd587ce59491faf7b4dce66a311e10e5e53\""
Jul 1 08:38:37.256902 containerd[1542]: time="2025-07-01T08:38:37.256852002Z" level=info msg="CreateContainer within sandbox \"17794c7b1a9617b4ad0d7d8800bb5cd587ce59491faf7b4dce66a311e10e5e53\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 1 08:38:38.053726 containerd[1542]: time="2025-07-01T08:38:38.053667574Z" level=info msg="Container e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:38:38.057332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973076636.mount: Deactivated successfully.
Jul 1 08:38:38.133362 containerd[1542]: time="2025-07-01T08:38:38.133290468Z" level=info msg="connecting to shim 43e8cab94b9094c37d8ace0f02ba5033e8406f27e6a5714de9f93122e348bacd" address="unix:///run/containerd/s/da8a18ac59d4105fdc0d2f8eb4eacd52fe128761c7ec8b8192f20b8de823747e" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:38:38.163659 systemd[1]: Started cri-containerd-43e8cab94b9094c37d8ace0f02ba5033e8406f27e6a5714de9f93122e348bacd.scope - libcontainer container 43e8cab94b9094c37d8ace0f02ba5033e8406f27e6a5714de9f93122e348bacd.
Jul 1 08:38:38.355671 containerd[1542]: time="2025-07-01T08:38:38.355381688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hkgvt,Uid:c3fc57aa-d48a-4b7b-b0fc-ba8ad180771f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"43e8cab94b9094c37d8ace0f02ba5033e8406f27e6a5714de9f93122e348bacd\""
Jul 1 08:38:38.357676 containerd[1542]: time="2025-07-01T08:38:38.357515670Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 1 08:38:38.411532 containerd[1542]: time="2025-07-01T08:38:38.411473593Z" level=info msg="CreateContainer within sandbox \"17794c7b1a9617b4ad0d7d8800bb5cd587ce59491faf7b4dce66a311e10e5e53\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7\""
Jul 1 08:38:38.412377 containerd[1542]: time="2025-07-01T08:38:38.412252672Z" level=info msg="StartContainer for \"e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7\""
Jul 1 08:38:38.414146 containerd[1542]: time="2025-07-01T08:38:38.414104300Z" level=info msg="connecting to shim e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7" address="unix:///run/containerd/s/53c274d20c4e555dc17496d18b15c05e0eb76fde114c1329807b0fc78a98ae6b" protocol=ttrpc version=3
Jul 1 08:38:38.444395 systemd[1]: Started cri-containerd-e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7.scope - libcontainer container e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7.
Jul 1 08:38:38.639002 containerd[1542]: time="2025-07-01T08:38:38.638894316Z" level=info msg="StartContainer for \"e6214515a4b41dccfbcceb0d1fb914ad6e5b729235119d222b683697e90cfac7\" returns successfully"
Jul 1 08:38:39.681166 kubelet[2710]: I0701 08:38:39.680899 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvmrl" podStartSLOduration=6.68087347 podStartE2EDuration="6.68087347s" podCreationTimestamp="2025-07-01 08:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:38:39.048668168 +0000 UTC m=+10.324737493" watchObservedRunningTime="2025-07-01 08:38:39.68087347 +0000 UTC m=+10.956942775"
Jul 1 08:38:42.560510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098786560.mount: Deactivated successfully.
Jul 1 08:38:43.480879 containerd[1542]: time="2025-07-01T08:38:43.480780266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:43.513610 containerd[1542]: time="2025-07-01T08:38:43.513471456Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 1 08:38:43.607521 containerd[1542]: time="2025-07-01T08:38:43.607455449Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:43.644876 containerd[1542]: time="2025-07-01T08:38:43.644750627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:38:43.645699 containerd[1542]: time="2025-07-01T08:38:43.645656653Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 5.288103332s"
Jul 1 08:38:43.645699 containerd[1542]: time="2025-07-01T08:38:43.645687440Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 1 08:38:43.816799 containerd[1542]: time="2025-07-01T08:38:43.816744230Z" level=info msg="CreateContainer within sandbox \"43e8cab94b9094c37d8ace0f02ba5033e8406f27e6a5714de9f93122e348bacd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 1 08:38:44.379901 containerd[1542]: time="2025-07-01T08:38:44.379802802Z" level=info msg="Container 2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:38:44.515982 containerd[1542]: time="2025-07-01T08:38:44.515904590Z" level=info msg="CreateContainer within sandbox \"43e8cab94b9094c37d8ace0f02ba5033e8406f27e6a5714de9f93122e348bacd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462\""
Jul 1 08:38:44.516543 containerd[1542]: time="2025-07-01T08:38:44.516486906Z" level=info msg="StartContainer for \"2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462\""
Jul 1 08:38:44.517691 containerd[1542]: time="2025-07-01T08:38:44.517658281Z" level=info msg="connecting to shim 2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462" address="unix:///run/containerd/s/da8a18ac59d4105fdc0d2f8eb4eacd52fe128761c7ec8b8192f20b8de823747e" protocol=ttrpc version=3
Jul 1 08:38:44.576618 systemd[1]: Started cri-containerd-2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462.scope - libcontainer container 2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462.
Jul 1 08:38:44.726863 containerd[1542]: time="2025-07-01T08:38:44.726587102Z" level=info msg="StartContainer for \"2081a91056a5b899a017d01dfa80bac8829b7899835a9c4db5057eac9101b462\" returns successfully"
Jul 1 08:38:54.628082 sudo[1766]: pam_unix(sudo:session): session closed for user root
Jul 1 08:38:54.630197 sshd[1765]: Connection closed by 10.0.0.1 port 51144
Jul 1 08:38:54.632568 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Jul 1 08:38:54.641255 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:51144.service: Deactivated successfully.
Jul 1 08:38:54.645873 systemd[1]: session-7.scope: Deactivated successfully.
Jul 1 08:38:54.646453 systemd[1]: session-7.scope: Consumed 6.664s CPU time, 229.3M memory peak.
Jul 1 08:38:54.649476 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit.
Jul 1 08:38:54.653542 systemd-logind[1525]: Removed session 7.
Jul 1 08:38:58.192123 kubelet[2710]: I0701 08:38:58.191792 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-hkgvt" podStartSLOduration=17.902033862 podStartE2EDuration="23.191731366s" podCreationTimestamp="2025-07-01 08:38:35 +0000 UTC" firstStartedPulling="2025-07-01 08:38:38.357133028 +0000 UTC m=+9.633202333" lastFinishedPulling="2025-07-01 08:38:43.646830532 +0000 UTC m=+14.922899837" observedRunningTime="2025-07-01 08:38:44.961614064 +0000 UTC m=+16.237683370" watchObservedRunningTime="2025-07-01 08:38:58.191731366 +0000 UTC m=+29.467800671"
Jul 1 08:38:58.212138 systemd[1]: Created slice kubepods-besteffort-pod57e0aea6_835f_417b_b764_dbb22f24a633.slice - libcontainer container kubepods-besteffort-pod57e0aea6_835f_417b_b764_dbb22f24a633.slice.
Jul 1 08:38:58.328704 kubelet[2710]: I0701 08:38:58.328626 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57e0aea6-835f-417b-b764-dbb22f24a633-tigera-ca-bundle\") pod \"calico-typha-588f6dd969-rdxtz\" (UID: \"57e0aea6-835f-417b-b764-dbb22f24a633\") " pod="calico-system/calico-typha-588f6dd969-rdxtz"
Jul 1 08:38:58.328704 kubelet[2710]: I0701 08:38:58.328700 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7k9s\" (UniqueName: \"kubernetes.io/projected/57e0aea6-835f-417b-b764-dbb22f24a633-kube-api-access-s7k9s\") pod \"calico-typha-588f6dd969-rdxtz\" (UID: \"57e0aea6-835f-417b-b764-dbb22f24a633\") " pod="calico-system/calico-typha-588f6dd969-rdxtz"
Jul 1 08:38:58.328704 kubelet[2710]: I0701 08:38:58.328726 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/57e0aea6-835f-417b-b764-dbb22f24a633-typha-certs\") pod \"calico-typha-588f6dd969-rdxtz\" (UID: \"57e0aea6-835f-417b-b764-dbb22f24a633\") " pod="calico-system/calico-typha-588f6dd969-rdxtz"
Jul 1 08:38:58.522260 containerd[1542]: time="2025-07-01T08:38:58.522127451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588f6dd969-rdxtz,Uid:57e0aea6-835f-417b-b764-dbb22f24a633,Namespace:calico-system,Attempt:0,}"
Jul 1 08:38:59.285183 systemd[1]: Created slice kubepods-besteffort-pod851dfd6f_0336_4863_a160_b6fd80d81473.slice - libcontainer container kubepods-besteffort-pod851dfd6f_0336_4863_a160_b6fd80d81473.slice.
Jul 1 08:38:59.287522 kubelet[2710]: E0701 08:38:59.287019 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785"
Jul 1 08:38:59.307338 containerd[1542]: time="2025-07-01T08:38:59.307271517Z" level=info msg="connecting to shim 20a31e4bf4af8b48746519c4b6329796b87e8ead0bd64340256ec5c62a1e1832" address="unix:///run/containerd/s/c82ba970ea34df10b365304185d0aee7d2c6d228ab7c0f7760793ca182e40269" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:38:59.368800 systemd[1]: Started cri-containerd-20a31e4bf4af8b48746519c4b6329796b87e8ead0bd64340256ec5c62a1e1832.scope - libcontainer container 20a31e4bf4af8b48746519c4b6329796b87e8ead0bd64340256ec5c62a1e1832.
Jul 1 08:38:59.436990 kubelet[2710]: I0701 08:38:59.436297 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41da0259-d39e-42de-9ee6-ab885e8b5785-registration-dir\") pod \"csi-node-driver-4kczh\" (UID: \"41da0259-d39e-42de-9ee6-ab885e8b5785\") " pod="calico-system/csi-node-driver-4kczh"
Jul 1 08:38:59.436990 kubelet[2710]: I0701 08:38:59.436351 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/851dfd6f-0336-4863-a160-b6fd80d81473-tigera-ca-bundle\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.436990 kubelet[2710]: I0701 08:38:59.436368 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7m42\" (UniqueName: \"kubernetes.io/projected/851dfd6f-0336-4863-a160-b6fd80d81473-kube-api-access-m7m42\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.436990 kubelet[2710]: I0701 08:38:59.436391 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klw5v\" (UniqueName: \"kubernetes.io/projected/41da0259-d39e-42de-9ee6-ab885e8b5785-kube-api-access-klw5v\") pod \"csi-node-driver-4kczh\" (UID: \"41da0259-d39e-42de-9ee6-ab885e8b5785\") " pod="calico-system/csi-node-driver-4kczh"
Jul 1 08:38:59.436990 kubelet[2710]: I0701 08:38:59.436405 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-var-lib-calico\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437406 kubelet[2710]: I0701 08:38:59.437249 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-xtables-lock\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437406 kubelet[2710]: I0701 08:38:59.437275 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41da0259-d39e-42de-9ee6-ab885e8b5785-socket-dir\") pod \"csi-node-driver-4kczh\" (UID: \"41da0259-d39e-42de-9ee6-ab885e8b5785\") " pod="calico-system/csi-node-driver-4kczh"
Jul 1 08:38:59.437406 kubelet[2710]: I0701 08:38:59.437293 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/851dfd6f-0336-4863-a160-b6fd80d81473-node-certs\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437406 kubelet[2710]: I0701 08:38:59.437313 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-cni-net-dir\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437406 kubelet[2710]: I0701 08:38:59.437329 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-policysync\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437756 kubelet[2710]: I0701 08:38:59.437342 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-cni-bin-dir\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437756 kubelet[2710]: I0701 08:38:59.437363 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-var-run-calico\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437756 kubelet[2710]: I0701 08:38:59.437380 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-flexvol-driver-host\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437756 kubelet[2710]: I0701 08:38:59.437396 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-lib-modules\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437756 kubelet[2710]: I0701 08:38:59.437429 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41da0259-d39e-42de-9ee6-ab885e8b5785-varrun\") pod \"csi-node-driver-4kczh\" (UID: \"41da0259-d39e-42de-9ee6-ab885e8b5785\") " pod="calico-system/csi-node-driver-4kczh"
Jul 1 08:38:59.437910 kubelet[2710]: I0701 08:38:59.437445 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/851dfd6f-0336-4863-a160-b6fd80d81473-cni-log-dir\") pod \"calico-node-58v6s\" (UID: \"851dfd6f-0336-4863-a160-b6fd80d81473\") " pod="calico-system/calico-node-58v6s"
Jul 1 08:38:59.437910 kubelet[2710]: I0701 08:38:59.437465 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41da0259-d39e-42de-9ee6-ab885e8b5785-kubelet-dir\") pod \"csi-node-driver-4kczh\" (UID: \"41da0259-d39e-42de-9ee6-ab885e8b5785\") " pod="calico-system/csi-node-driver-4kczh"
Jul 1 08:38:59.488752 containerd[1542]: time="2025-07-01T08:38:59.488695941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588f6dd969-rdxtz,Uid:57e0aea6-835f-417b-b764-dbb22f24a633,Namespace:calico-system,Attempt:0,} returns sandbox id \"20a31e4bf4af8b48746519c4b6329796b87e8ead0bd64340256ec5c62a1e1832\""
Jul 1 08:38:59.491810 containerd[1542]: time="2025-07-01T08:38:59.491763045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 1 08:38:59.544340 kubelet[2710]: E0701 08:38:59.543756 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:38:59.544340 kubelet[2710]: W0701 08:38:59.544263 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:38:59.544340 kubelet[2710]: E0701 08:38:59.544299 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:38:59.549573 kubelet[2710]: E0701 08:38:59.549502 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:38:59.549573 kubelet[2710]: W0701 08:38:59.549520 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:38:59.549573 kubelet[2710]: E0701 08:38:59.549539 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:38:59.553888 kubelet[2710]: E0701 08:38:59.553852 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:38:59.553987 kubelet[2710]: W0701 08:38:59.553919 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:38:59.553987 kubelet[2710]: E0701 08:38:59.553944 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:38:59.591363 containerd[1542]: time="2025-07-01T08:38:59.591311574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58v6s,Uid:851dfd6f-0336-4863-a160-b6fd80d81473,Namespace:calico-system,Attempt:0,}"
Jul 1 08:38:59.621255 containerd[1542]: time="2025-07-01T08:38:59.621196740Z" level=info msg="connecting to shim 2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3" address="unix:///run/containerd/s/337f59d8e89a59ee8ad40292bb406a1895f02e75612d80dd0ed350fc6b87bc4a" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:38:59.655622 systemd[1]: Started cri-containerd-2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3.scope - libcontainer container 2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3.
Jul 1 08:38:59.825581 containerd[1542]: time="2025-07-01T08:38:59.825482309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58v6s,Uid:851dfd6f-0336-4863-a160-b6fd80d81473,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\""
Jul 1 08:39:00.875021 kubelet[2710]: E0701 08:39:00.874916 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785"
Jul 1 08:39:02.875570 kubelet[2710]: E0701 08:39:02.874905 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785"
Jul 1 08:39:04.874909 kubelet[2710]: E0701 08:39:04.874829 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785"
Jul 1 08:39:05.230864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984421481.mount: Deactivated successfully.
Jul 1 08:39:06.876954 kubelet[2710]: E0701 08:39:06.876867 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785"
Jul 1 08:39:07.499243 containerd[1542]: time="2025-07-01T08:39:07.499077627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:39:07.518466 containerd[1542]: time="2025-07-01T08:39:07.518308606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 1 08:39:07.565135 containerd[1542]: time="2025-07-01T08:39:07.565065518Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:39:07.604819 containerd[1542]: time="2025-07-01T08:39:07.604755102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:39:07.605634 containerd[1542]: time="2025-07-01T08:39:07.605576144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 8.113613073s"
Jul 1 08:39:07.605634 containerd[1542]: time="2025-07-01T08:39:07.605613224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 1 08:39:07.606775 containerd[1542]: time="2025-07-01T08:39:07.606723620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 1 08:39:07.895959 containerd[1542]: time="2025-07-01T08:39:07.895882167Z" level=info msg="CreateContainer within sandbox \"20a31e4bf4af8b48746519c4b6329796b87e8ead0bd64340256ec5c62a1e1832\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 1 08:39:08.115455 containerd[1542]: time="2025-07-01T08:39:08.115238532Z" level=info msg="Container d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:39:08.231657 containerd[1542]: time="2025-07-01T08:39:08.231481066Z" level=info msg="CreateContainer within sandbox \"20a31e4bf4af8b48746519c4b6329796b87e8ead0bd64340256ec5c62a1e1832\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada\""
Jul 1 08:39:08.233475 containerd[1542]: time="2025-07-01T08:39:08.232485804Z" level=info msg="StartContainer for \"d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada\""
Jul 1 08:39:08.233863 containerd[1542]: time="2025-07-01T08:39:08.233833506Z" level=info msg="connecting to shim d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada" address="unix:///run/containerd/s/c82ba970ea34df10b365304185d0aee7d2c6d228ab7c0f7760793ca182e40269" protocol=ttrpc version=3
Jul 1 08:39:08.263040 systemd[1]: Started cri-containerd-d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada.scope - libcontainer container d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada.
Jul 1 08:39:08.340574 containerd[1542]: time="2025-07-01T08:39:08.340510982Z" level=info msg="StartContainer for \"d1beaf70fbc867b7db8334d88dd72779ac2da7bb9d974c40f20267ddc52acada\" returns successfully"
Jul 1 08:39:08.876163 kubelet[2710]: E0701 08:39:08.876090 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785"
Jul 1 08:39:09.000852 kubelet[2710]: E0701 08:39:09.000809 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.000852 kubelet[2710]: W0701 08:39:09.000835 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.000852 kubelet[2710]: E0701 08:39:09.000857 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.001169 kubelet[2710]: E0701 08:39:09.001145 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.001169 kubelet[2710]: W0701 08:39:09.001155 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.001169 kubelet[2710]: E0701 08:39:09.001163 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.001327 kubelet[2710]: E0701 08:39:09.001311 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.001327 kubelet[2710]: W0701 08:39:09.001320 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.001327 kubelet[2710]: E0701 08:39:09.001328 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.001554 kubelet[2710]: E0701 08:39:09.001536 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.001554 kubelet[2710]: W0701 08:39:09.001545 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.001554 kubelet[2710]: E0701 08:39:09.001553 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.001784 kubelet[2710]: E0701 08:39:09.001767 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.001784 kubelet[2710]: W0701 08:39:09.001778 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.001858 kubelet[2710]: E0701 08:39:09.001786 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.001938 kubelet[2710]: E0701 08:39:09.001923 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.001938 kubelet[2710]: W0701 08:39:09.001931 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.001997 kubelet[2710]: E0701 08:39:09.001939 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.002081 kubelet[2710]: E0701 08:39:09.002066 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.002081 kubelet[2710]: W0701 08:39:09.002074 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.002145 kubelet[2710]: E0701 08:39:09.002081 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.002227 kubelet[2710]: E0701 08:39:09.002211 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.002227 kubelet[2710]: W0701 08:39:09.002220 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.002284 kubelet[2710]: E0701 08:39:09.002228 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.002387 kubelet[2710]: E0701 08:39:09.002371 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.002387 kubelet[2710]: W0701 08:39:09.002380 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.002486 kubelet[2710]: E0701 08:39:09.002387 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.002563 kubelet[2710]: E0701 08:39:09.002547 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.002563 kubelet[2710]: W0701 08:39:09.002555 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.002626 kubelet[2710]: E0701 08:39:09.002565 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.002708 kubelet[2710]: E0701 08:39:09.002693 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.002708 kubelet[2710]: W0701 08:39:09.002701 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.002772 kubelet[2710]: E0701 08:39:09.002709 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:39:09.002861 kubelet[2710]: E0701 08:39:09.002846 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:39:09.002861 kubelet[2710]: W0701 08:39:09.002854 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:39:09.002918 kubelet[2710]: E0701 08:39:09.002862 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 1 08:39:09.003027 kubelet[2710]: E0701 08:39:09.003011 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.003027 kubelet[2710]: W0701 08:39:09.003020 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.003090 kubelet[2710]: E0701 08:39:09.003028 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.003174 kubelet[2710]: E0701 08:39:09.003159 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.003174 kubelet[2710]: W0701 08:39:09.003168 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.003247 kubelet[2710]: E0701 08:39:09.003175 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.003323 kubelet[2710]: E0701 08:39:09.003308 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.003323 kubelet[2710]: W0701 08:39:09.003317 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.003387 kubelet[2710]: E0701 08:39:09.003326 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.003632 kubelet[2710]: E0701 08:39:09.003605 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.003632 kubelet[2710]: W0701 08:39:09.003617 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.003632 kubelet[2710]: E0701 08:39:09.003626 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.003856 kubelet[2710]: E0701 08:39:09.003836 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.003856 kubelet[2710]: W0701 08:39:09.003852 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.003937 kubelet[2710]: E0701 08:39:09.003865 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.004093 kubelet[2710]: E0701 08:39:09.004079 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.004093 kubelet[2710]: W0701 08:39:09.004091 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.004146 kubelet[2710]: E0701 08:39:09.004100 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.004312 kubelet[2710]: E0701 08:39:09.004297 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.004337 kubelet[2710]: W0701 08:39:09.004311 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.004337 kubelet[2710]: E0701 08:39:09.004321 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.004521 kubelet[2710]: E0701 08:39:09.004508 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.004521 kubelet[2710]: W0701 08:39:09.004518 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.004579 kubelet[2710]: E0701 08:39:09.004527 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.004734 kubelet[2710]: E0701 08:39:09.004719 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.004760 kubelet[2710]: W0701 08:39:09.004732 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.004760 kubelet[2710]: E0701 08:39:09.004744 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.005017 kubelet[2710]: E0701 08:39:09.004996 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.005017 kubelet[2710]: W0701 08:39:09.005008 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.005062 kubelet[2710]: E0701 08:39:09.005019 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.005324 kubelet[2710]: E0701 08:39:09.005311 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.005324 kubelet[2710]: W0701 08:39:09.005322 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.005388 kubelet[2710]: E0701 08:39:09.005332 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.005556 kubelet[2710]: E0701 08:39:09.005542 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.005556 kubelet[2710]: W0701 08:39:09.005552 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.005607 kubelet[2710]: E0701 08:39:09.005560 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.005734 kubelet[2710]: E0701 08:39:09.005722 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.005734 kubelet[2710]: W0701 08:39:09.005731 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.005782 kubelet[2710]: E0701 08:39:09.005738 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.005916 kubelet[2710]: E0701 08:39:09.005904 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.005916 kubelet[2710]: W0701 08:39:09.005913 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.005967 kubelet[2710]: E0701 08:39:09.005921 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.006093 kubelet[2710]: E0701 08:39:09.006080 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.006093 kubelet[2710]: W0701 08:39:09.006089 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.006151 kubelet[2710]: E0701 08:39:09.006096 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.006309 kubelet[2710]: E0701 08:39:09.006297 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.006309 kubelet[2710]: W0701 08:39:09.006306 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.006359 kubelet[2710]: E0701 08:39:09.006313 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.006813 kubelet[2710]: E0701 08:39:09.006764 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.006813 kubelet[2710]: W0701 08:39:09.006803 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.006876 kubelet[2710]: E0701 08:39:09.006835 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.007093 kubelet[2710]: E0701 08:39:09.007058 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.007093 kubelet[2710]: W0701 08:39:09.007073 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.007093 kubelet[2710]: E0701 08:39:09.007084 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.007383 kubelet[2710]: E0701 08:39:09.007360 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.007383 kubelet[2710]: W0701 08:39:09.007378 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.007519 kubelet[2710]: E0701 08:39:09.007390 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.007759 kubelet[2710]: E0701 08:39:09.007740 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.007759 kubelet[2710]: W0701 08:39:09.007756 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.007815 kubelet[2710]: E0701 08:39:09.007776 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:09.008023 kubelet[2710]: E0701 08:39:09.008006 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:09.008023 kubelet[2710]: W0701 08:39:09.008018 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:09.008111 kubelet[2710]: E0701 08:39:09.008028 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:09.980228 kubelet[2710]: I0701 08:39:09.980192 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:39:10.008138 kubelet[2710]: E0701 08:39:10.008094 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.008138 kubelet[2710]: W0701 08:39:10.008120 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.008138 kubelet[2710]: E0701 08:39:10.008141 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.008436 kubelet[2710]: E0701 08:39:10.008374 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.008436 kubelet[2710]: W0701 08:39:10.008384 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.008436 kubelet[2710]: E0701 08:39:10.008433 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.008659 kubelet[2710]: E0701 08:39:10.008634 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.008659 kubelet[2710]: W0701 08:39:10.008654 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.008720 kubelet[2710]: E0701 08:39:10.008664 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.008895 kubelet[2710]: E0701 08:39:10.008870 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.008895 kubelet[2710]: W0701 08:39:10.008885 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.008895 kubelet[2710]: E0701 08:39:10.008897 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.009112 kubelet[2710]: E0701 08:39:10.009082 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.009112 kubelet[2710]: W0701 08:39:10.009103 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.009112 kubelet[2710]: E0701 08:39:10.009111 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.009266 kubelet[2710]: E0701 08:39:10.009251 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.009266 kubelet[2710]: W0701 08:39:10.009260 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.009315 kubelet[2710]: E0701 08:39:10.009269 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.009473 kubelet[2710]: E0701 08:39:10.009443 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.009473 kubelet[2710]: W0701 08:39:10.009455 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.009473 kubelet[2710]: E0701 08:39:10.009463 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.009743 kubelet[2710]: E0701 08:39:10.009624 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.009743 kubelet[2710]: W0701 08:39:10.009632 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.009743 kubelet[2710]: E0701 08:39:10.009640 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.009867 kubelet[2710]: E0701 08:39:10.009845 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.009867 kubelet[2710]: W0701 08:39:10.009861 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.009951 kubelet[2710]: E0701 08:39:10.009874 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.010079 kubelet[2710]: E0701 08:39:10.010062 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.010079 kubelet[2710]: W0701 08:39:10.010074 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.010187 kubelet[2710]: E0701 08:39:10.010085 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.010270 kubelet[2710]: E0701 08:39:10.010251 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.010270 kubelet[2710]: W0701 08:39:10.010263 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.010356 kubelet[2710]: E0701 08:39:10.010273 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.010494 kubelet[2710]: E0701 08:39:10.010476 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.010494 kubelet[2710]: W0701 08:39:10.010488 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.010593 kubelet[2710]: E0701 08:39:10.010499 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.010730 kubelet[2710]: E0701 08:39:10.010712 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.010730 kubelet[2710]: W0701 08:39:10.010723 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.010809 kubelet[2710]: E0701 08:39:10.010734 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.010949 kubelet[2710]: E0701 08:39:10.010922 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.010949 kubelet[2710]: W0701 08:39:10.010932 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.010949 kubelet[2710]: E0701 08:39:10.010940 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.011109 kubelet[2710]: E0701 08:39:10.011095 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.011109 kubelet[2710]: W0701 08:39:10.011104 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.011179 kubelet[2710]: E0701 08:39:10.011111 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.011349 kubelet[2710]: E0701 08:39:10.011325 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.011349 kubelet[2710]: W0701 08:39:10.011335 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.011349 kubelet[2710]: E0701 08:39:10.011343 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.011569 kubelet[2710]: E0701 08:39:10.011553 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.011569 kubelet[2710]: W0701 08:39:10.011562 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.011569 kubelet[2710]: E0701 08:39:10.011569 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.011770 kubelet[2710]: E0701 08:39:10.011756 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.011770 kubelet[2710]: W0701 08:39:10.011765 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.011770 kubelet[2710]: E0701 08:39:10.011773 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.011982 kubelet[2710]: E0701 08:39:10.011968 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.011982 kubelet[2710]: W0701 08:39:10.011977 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.012052 kubelet[2710]: E0701 08:39:10.011984 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.012171 kubelet[2710]: E0701 08:39:10.012156 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.012171 kubelet[2710]: W0701 08:39:10.012167 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.012233 kubelet[2710]: E0701 08:39:10.012175 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.012370 kubelet[2710]: E0701 08:39:10.012356 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.012370 kubelet[2710]: W0701 08:39:10.012365 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.012469 kubelet[2710]: E0701 08:39:10.012373 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.012616 kubelet[2710]: E0701 08:39:10.012600 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.012616 kubelet[2710]: W0701 08:39:10.012610 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.012678 kubelet[2710]: E0701 08:39:10.012630 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.013145 kubelet[2710]: E0701 08:39:10.013096 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.013145 kubelet[2710]: W0701 08:39:10.013134 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.013234 kubelet[2710]: E0701 08:39:10.013163 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.013467 kubelet[2710]: E0701 08:39:10.013445 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.013467 kubelet[2710]: W0701 08:39:10.013460 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.013467 kubelet[2710]: E0701 08:39:10.013470 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.013723 kubelet[2710]: E0701 08:39:10.013701 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.013723 kubelet[2710]: W0701 08:39:10.013716 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.013797 kubelet[2710]: E0701 08:39:10.013728 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.013977 kubelet[2710]: E0701 08:39:10.013956 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.013977 kubelet[2710]: W0701 08:39:10.013971 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.014054 kubelet[2710]: E0701 08:39:10.013982 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.014259 kubelet[2710]: E0701 08:39:10.014238 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.014259 kubelet[2710]: W0701 08:39:10.014253 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.014333 kubelet[2710]: E0701 08:39:10.014264 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.014562 kubelet[2710]: E0701 08:39:10.014540 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.014562 kubelet[2710]: W0701 08:39:10.014556 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.014638 kubelet[2710]: E0701 08:39:10.014569 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.014963 kubelet[2710]: E0701 08:39:10.014930 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.014963 kubelet[2710]: W0701 08:39:10.014952 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.015038 kubelet[2710]: E0701 08:39:10.014968 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.015268 kubelet[2710]: E0701 08:39:10.015251 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.015268 kubelet[2710]: W0701 08:39:10.015263 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.015346 kubelet[2710]: E0701 08:39:10.015273 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.015629 kubelet[2710]: E0701 08:39:10.015596 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.015629 kubelet[2710]: W0701 08:39:10.015621 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.015728 kubelet[2710]: E0701 08:39:10.015637 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.016182 kubelet[2710]: E0701 08:39:10.016156 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.016182 kubelet[2710]: W0701 08:39:10.016172 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.016182 kubelet[2710]: E0701 08:39:10.016181 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:39:10.016454 kubelet[2710]: E0701 08:39:10.016425 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:39:10.016454 kubelet[2710]: W0701 08:39:10.016438 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:39:10.016454 kubelet[2710]: E0701 08:39:10.016447 2710 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:39:10.874912 kubelet[2710]: E0701 08:39:10.874771 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:12.875387 kubelet[2710]: E0701 08:39:12.875295 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:14.113459 containerd[1542]: time="2025-07-01T08:39:14.113310817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:14.160038 containerd[1542]: time="2025-07-01T08:39:14.159948518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 1 08:39:14.195004 containerd[1542]: time="2025-07-01T08:39:14.194913553Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:14.238513 containerd[1542]: time="2025-07-01T08:39:14.238369457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:14.238923 containerd[1542]: time="2025-07-01T08:39:14.238861590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id 
\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 6.632098146s" Jul 1 08:39:14.238972 containerd[1542]: time="2025-07-01T08:39:14.238926162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 1 08:39:14.298352 containerd[1542]: time="2025-07-01T08:39:14.298259749Z" level=info msg="CreateContainer within sandbox \"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 1 08:39:14.847325 containerd[1542]: time="2025-07-01T08:39:14.847222481Z" level=info msg="Container 4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:14.874985 kubelet[2710]: E0701 08:39:14.874919 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:15.671500 containerd[1542]: time="2025-07-01T08:39:15.671408021Z" level=info msg="CreateContainer within sandbox \"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\"" Jul 1 08:39:15.672024 containerd[1542]: time="2025-07-01T08:39:15.671945320Z" level=info msg="StartContainer for \"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\"" Jul 1 08:39:15.719987 containerd[1542]: 
time="2025-07-01T08:39:15.719892818Z" level=info msg="connecting to shim 4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc" address="unix:///run/containerd/s/337f59d8e89a59ee8ad40292bb406a1895f02e75612d80dd0ed350fc6b87bc4a" protocol=ttrpc version=3 Jul 1 08:39:15.748670 systemd[1]: Started cri-containerd-4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc.scope - libcontainer container 4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc. Jul 1 08:39:15.807243 systemd[1]: cri-containerd-4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc.scope: Deactivated successfully. Jul 1 08:39:15.810408 containerd[1542]: time="2025-07-01T08:39:15.810373276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\" id:\"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\" pid:3386 exited_at:{seconds:1751359155 nanos:809879848}" Jul 1 08:39:15.958169 containerd[1542]: time="2025-07-01T08:39:15.957986723Z" level=info msg="received exit event container_id:\"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\" id:\"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\" pid:3386 exited_at:{seconds:1751359155 nanos:809879848}" Jul 1 08:39:15.959956 containerd[1542]: time="2025-07-01T08:39:15.959922269Z" level=info msg="StartContainer for \"4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc\" returns successfully" Jul 1 08:39:15.984484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bf69ea150817c329d0da3d854d4a5ff4b1cb5caef42cd7aa5615d7a501e98bc-rootfs.mount: Deactivated successfully. 
Jul 1 08:39:16.137188 kubelet[2710]: I0701 08:39:16.136587 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-588f6dd969-rdxtz" podStartSLOduration=10.020926206 podStartE2EDuration="18.136568983s" podCreationTimestamp="2025-07-01 08:38:58 +0000 UTC" firstStartedPulling="2025-07-01 08:38:59.490915392 +0000 UTC m=+30.766984697" lastFinishedPulling="2025-07-01 08:39:07.606558169 +0000 UTC m=+38.882627474" observedRunningTime="2025-07-01 08:39:09.517096647 +0000 UTC m=+40.793165952" watchObservedRunningTime="2025-07-01 08:39:16.136568983 +0000 UTC m=+47.412638288" Jul 1 08:39:16.875215 kubelet[2710]: E0701 08:39:16.875123 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:19.057137 kubelet[2710]: E0701 08:39:19.057055 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:19.063383 containerd[1542]: time="2025-07-01T08:39:19.063333534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 1 08:39:20.875160 kubelet[2710]: E0701 08:39:20.874925 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:22.874943 kubelet[2710]: E0701 08:39:22.874798 2710 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:24.874671 kubelet[2710]: E0701 08:39:24.874621 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:25.461510 containerd[1542]: time="2025-07-01T08:39:25.461440626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:25.462867 containerd[1542]: time="2025-07-01T08:39:25.462822504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 1 08:39:25.464251 containerd[1542]: time="2025-07-01T08:39:25.464200403Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:25.468370 containerd[1542]: time="2025-07-01T08:39:25.468320939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:25.468965 containerd[1542]: time="2025-07-01T08:39:25.468921572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.405547623s" Jul 1 08:39:25.468965 containerd[1542]: time="2025-07-01T08:39:25.468953183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 1 08:39:25.474321 containerd[1542]: time="2025-07-01T08:39:25.474263465Z" level=info msg="CreateContainer within sandbox \"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 1 08:39:25.484401 containerd[1542]: time="2025-07-01T08:39:25.484356212Z" level=info msg="Container b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:25.495208 containerd[1542]: time="2025-07-01T08:39:25.495151833Z" level=info msg="CreateContainer within sandbox \"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\"" Jul 1 08:39:25.497210 containerd[1542]: time="2025-07-01T08:39:25.495859263Z" level=info msg="StartContainer for \"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\"" Jul 1 08:39:25.497210 containerd[1542]: time="2025-07-01T08:39:25.497150906Z" level=info msg="connecting to shim b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184" address="unix:///run/containerd/s/337f59d8e89a59ee8ad40292bb406a1895f02e75612d80dd0ed350fc6b87bc4a" protocol=ttrpc version=3 Jul 1 08:39:25.522609 systemd[1]: Started cri-containerd-b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184.scope - libcontainer container b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184. 
Jul 1 08:39:25.568501 containerd[1542]: time="2025-07-01T08:39:25.568322925Z" level=info msg="StartContainer for \"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\" returns successfully" Jul 1 08:39:25.986752 kubelet[2710]: I0701 08:39:25.986684 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:39:26.874671 kubelet[2710]: E0701 08:39:26.874583 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:27.479381 containerd[1542]: time="2025-07-01T08:39:27.479319243Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:39:27.482931 systemd[1]: cri-containerd-b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184.scope: Deactivated successfully. Jul 1 08:39:27.484278 systemd[1]: cri-containerd-b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184.scope: Consumed 688ms CPU time, 179.3M memory peak, 2.4M read from disk, 171.2M written to disk. 
Jul 1 08:39:27.485127 containerd[1542]: time="2025-07-01T08:39:27.484950757Z" level=info msg="received exit event container_id:\"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\" id:\"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\" pid:3448 exited_at:{seconds:1751359167 nanos:484643352}" Jul 1 08:39:27.485127 containerd[1542]: time="2025-07-01T08:39:27.485063346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\" id:\"b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184\" pid:3448 exited_at:{seconds:1751359167 nanos:484643352}" Jul 1 08:39:27.511310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b748020f25d71d9469947aa03233116dd51fc064b01276423376f5e9c4ae9184-rootfs.mount: Deactivated successfully. Jul 1 08:39:27.532216 kubelet[2710]: I0701 08:39:27.532157 2710 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 1 08:39:28.048107 systemd[1]: Created slice kubepods-burstable-pod7c1e9f24_4286_4947_b2c6_20a4f6d90605.slice - libcontainer container kubepods-burstable-pod7c1e9f24_4286_4947_b2c6_20a4f6d90605.slice. Jul 1 08:39:28.071451 systemd[1]: Created slice kubepods-besteffort-pod0786e9ab_ac8a_49c1_b383_a63a8b688648.slice - libcontainer container kubepods-besteffort-pod0786e9ab_ac8a_49c1_b383_a63a8b688648.slice. Jul 1 08:39:28.086634 systemd[1]: Created slice kubepods-besteffort-podbe22af01_1e33_4ff0_84fd_b35a2cbb1ca7.slice - libcontainer container kubepods-besteffort-podbe22af01_1e33_4ff0_84fd_b35a2cbb1ca7.slice. Jul 1 08:39:28.092501 containerd[1542]: time="2025-07-01T08:39:28.092449780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 1 08:39:28.099228 systemd[1]: Created slice kubepods-burstable-pod04495a2b_e4bd_4639_a451_560be8d17132.slice - libcontainer container kubepods-burstable-pod04495a2b_e4bd_4639_a451_560be8d17132.slice. 
Jul 1 08:39:28.106460 systemd[1]: Created slice kubepods-besteffort-podfad7d2f6_f023_4dfe_8f40_2b786f80da76.slice - libcontainer container kubepods-besteffort-podfad7d2f6_f023_4dfe_8f40_2b786f80da76.slice. Jul 1 08:39:28.114493 systemd[1]: Created slice kubepods-besteffort-pod768e85aa_cad8_45c2_9aa4_684f6da1de45.slice - libcontainer container kubepods-besteffort-pod768e85aa_cad8_45c2_9aa4_684f6da1de45.slice. Jul 1 08:39:28.119927 systemd[1]: Created slice kubepods-besteffort-pod8dd24de6_7001_4bd3_8b8c_a553b183fab5.slice - libcontainer container kubepods-besteffort-pod8dd24de6_7001_4bd3_8b8c_a553b183fab5.slice. Jul 1 08:39:28.139375 kubelet[2710]: I0701 08:39:28.137688 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd24de6-7001-4bd3-8b8c-a553b183fab5-config\") pod \"goldmane-768f4c5c69-hrqb5\" (UID: \"8dd24de6-7001-4bd3-8b8c-a553b183fab5\") " pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:28.139375 kubelet[2710]: I0701 08:39:28.137741 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fad7d2f6-f023-4dfe-8f40-2b786f80da76-calico-apiserver-certs\") pod \"calico-apiserver-755864c8f7-nb9gx\" (UID: \"fad7d2f6-f023-4dfe-8f40-2b786f80da76\") " pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" Jul 1 08:39:28.139375 kubelet[2710]: I0701 08:39:28.137765 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlgg\" (UniqueName: \"kubernetes.io/projected/fad7d2f6-f023-4dfe-8f40-2b786f80da76-kube-api-access-8rlgg\") pod \"calico-apiserver-755864c8f7-nb9gx\" (UID: \"fad7d2f6-f023-4dfe-8f40-2b786f80da76\") " pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" Jul 1 08:39:28.139375 kubelet[2710]: I0701 08:39:28.137787 2710 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be22af01-1e33-4ff0-84fd-b35a2cbb1ca7-tigera-ca-bundle\") pod \"calico-kube-controllers-6d868c7f67-4frp4\" (UID: \"be22af01-1e33-4ff0-84fd-b35a2cbb1ca7\") " pod="calico-system/calico-kube-controllers-6d868c7f67-4frp4" Jul 1 08:39:28.139375 kubelet[2710]: I0701 08:39:28.137806 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dd24de6-7001-4bd3-8b8c-a553b183fab5-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-hrqb5\" (UID: \"8dd24de6-7001-4bd3-8b8c-a553b183fab5\") " pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:28.139748 kubelet[2710]: I0701 08:39:28.137828 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqlq\" (UniqueName: \"kubernetes.io/projected/04495a2b-e4bd-4639-a451-560be8d17132-kube-api-access-sxqlq\") pod \"coredns-674b8bbfcf-bk4zk\" (UID: \"04495a2b-e4bd-4639-a451-560be8d17132\") " pod="kube-system/coredns-674b8bbfcf-bk4zk" Jul 1 08:39:28.139748 kubelet[2710]: I0701 08:39:28.137850 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zzs\" (UniqueName: \"kubernetes.io/projected/768e85aa-cad8-45c2-9aa4-684f6da1de45-kube-api-access-x6zzs\") pod \"calico-apiserver-755864c8f7-l96sx\" (UID: \"768e85aa-cad8-45c2-9aa4-684f6da1de45\") " pod="calico-apiserver/calico-apiserver-755864c8f7-l96sx" Jul 1 08:39:28.139748 kubelet[2710]: I0701 08:39:28.137876 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-ca-bundle\") pod \"whisker-5f67cd65d6-ngf6p\" (UID: \"0786e9ab-ac8a-49c1-b383-a63a8b688648\") " 
pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:28.139748 kubelet[2710]: I0701 08:39:28.137895 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8dd24de6-7001-4bd3-8b8c-a553b183fab5-goldmane-key-pair\") pod \"goldmane-768f4c5c69-hrqb5\" (UID: \"8dd24de6-7001-4bd3-8b8c-a553b183fab5\") " pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:28.139748 kubelet[2710]: I0701 08:39:28.137945 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/768e85aa-cad8-45c2-9aa4-684f6da1de45-calico-apiserver-certs\") pod \"calico-apiserver-755864c8f7-l96sx\" (UID: \"768e85aa-cad8-45c2-9aa4-684f6da1de45\") " pod="calico-apiserver/calico-apiserver-755864c8f7-l96sx" Jul 1 08:39:28.139918 kubelet[2710]: I0701 08:39:28.137985 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9gh\" (UniqueName: \"kubernetes.io/projected/7c1e9f24-4286-4947-b2c6-20a4f6d90605-kube-api-access-mb9gh\") pod \"coredns-674b8bbfcf-m5pqf\" (UID: \"7c1e9f24-4286-4947-b2c6-20a4f6d90605\") " pod="kube-system/coredns-674b8bbfcf-m5pqf" Jul 1 08:39:28.139918 kubelet[2710]: I0701 08:39:28.138019 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kld4g\" (UniqueName: \"kubernetes.io/projected/0786e9ab-ac8a-49c1-b383-a63a8b688648-kube-api-access-kld4g\") pod \"whisker-5f67cd65d6-ngf6p\" (UID: \"0786e9ab-ac8a-49c1-b383-a63a8b688648\") " pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:28.139918 kubelet[2710]: I0701 08:39:28.138048 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-backend-key-pair\") 
pod \"whisker-5f67cd65d6-ngf6p\" (UID: \"0786e9ab-ac8a-49c1-b383-a63a8b688648\") " pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:28.139918 kubelet[2710]: I0701 08:39:28.138071 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpsvq\" (UniqueName: \"kubernetes.io/projected/be22af01-1e33-4ff0-84fd-b35a2cbb1ca7-kube-api-access-jpsvq\") pod \"calico-kube-controllers-6d868c7f67-4frp4\" (UID: \"be22af01-1e33-4ff0-84fd-b35a2cbb1ca7\") " pod="calico-system/calico-kube-controllers-6d868c7f67-4frp4" Jul 1 08:39:28.139918 kubelet[2710]: I0701 08:39:28.138112 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c1e9f24-4286-4947-b2c6-20a4f6d90605-config-volume\") pod \"coredns-674b8bbfcf-m5pqf\" (UID: \"7c1e9f24-4286-4947-b2c6-20a4f6d90605\") " pod="kube-system/coredns-674b8bbfcf-m5pqf" Jul 1 08:39:28.140102 kubelet[2710]: I0701 08:39:28.138134 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87rvj\" (UniqueName: \"kubernetes.io/projected/8dd24de6-7001-4bd3-8b8c-a553b183fab5-kube-api-access-87rvj\") pod \"goldmane-768f4c5c69-hrqb5\" (UID: \"8dd24de6-7001-4bd3-8b8c-a553b183fab5\") " pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:28.140102 kubelet[2710]: I0701 08:39:28.138156 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04495a2b-e4bd-4639-a451-560be8d17132-config-volume\") pod \"coredns-674b8bbfcf-bk4zk\" (UID: \"04495a2b-e4bd-4639-a451-560be8d17132\") " pod="kube-system/coredns-674b8bbfcf-bk4zk" Jul 1 08:39:28.353786 containerd[1542]: time="2025-07-01T08:39:28.353728359Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-m5pqf,Uid:7c1e9f24-4286-4947-b2c6-20a4f6d90605,Namespace:kube-system,Attempt:0,}" Jul 1 08:39:28.379951 containerd[1542]: time="2025-07-01T08:39:28.379677308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f67cd65d6-ngf6p,Uid:0786e9ab-ac8a-49c1-b383-a63a8b688648,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:28.391456 containerd[1542]: time="2025-07-01T08:39:28.391393776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d868c7f67-4frp4,Uid:be22af01-1e33-4ff0-84fd-b35a2cbb1ca7,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:28.403447 containerd[1542]: time="2025-07-01T08:39:28.403373552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bk4zk,Uid:04495a2b-e4bd-4639-a451-560be8d17132,Namespace:kube-system,Attempt:0,}" Jul 1 08:39:28.412560 containerd[1542]: time="2025-07-01T08:39:28.412497960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-nb9gx,Uid:fad7d2f6-f023-4dfe-8f40-2b786f80da76,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:39:28.418750 containerd[1542]: time="2025-07-01T08:39:28.418697764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-l96sx,Uid:768e85aa-cad8-45c2-9aa4-684f6da1de45,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:39:28.425820 containerd[1542]: time="2025-07-01T08:39:28.425767289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hrqb5,Uid:8dd24de6-7001-4bd3-8b8c-a553b183fab5,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:28.610097 containerd[1542]: time="2025-07-01T08:39:28.609669730Z" level=error msg="Failed to destroy network for sandbox \"1561901689e74ffc88f85dabbf876f1ac06d3b3bd6ce6ded19f38b82ba6ab20e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
1 08:39:28.612695 systemd[1]: run-netns-cni\x2dbceca69a\x2dfddc\x2df021\x2d781c\x2dbc0e2b10f5e8.mount: Deactivated successfully. Jul 1 08:39:28.755442 containerd[1542]: time="2025-07-01T08:39:28.755316994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m5pqf,Uid:7c1e9f24-4286-4947-b2c6-20a4f6d90605,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561901689e74ffc88f85dabbf876f1ac06d3b3bd6ce6ded19f38b82ba6ab20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.756438 kubelet[2710]: E0701 08:39:28.756023 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561901689e74ffc88f85dabbf876f1ac06d3b3bd6ce6ded19f38b82ba6ab20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.756438 kubelet[2710]: E0701 08:39:28.756144 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561901689e74ffc88f85dabbf876f1ac06d3b3bd6ce6ded19f38b82ba6ab20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m5pqf" Jul 1 08:39:28.756438 kubelet[2710]: E0701 08:39:28.756176 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561901689e74ffc88f85dabbf876f1ac06d3b3bd6ce6ded19f38b82ba6ab20e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m5pqf" Jul 1 08:39:28.756922 kubelet[2710]: E0701 08:39:28.756254 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m5pqf_kube-system(7c1e9f24-4286-4947-b2c6-20a4f6d90605)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m5pqf_kube-system(7c1e9f24-4286-4947-b2c6-20a4f6d90605)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1561901689e74ffc88f85dabbf876f1ac06d3b3bd6ce6ded19f38b82ba6ab20e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m5pqf" podUID="7c1e9f24-4286-4947-b2c6-20a4f6d90605" Jul 1 08:39:28.850501 containerd[1542]: time="2025-07-01T08:39:28.850406491Z" level=error msg="Failed to destroy network for sandbox \"28da1377b0ad1f5ef66c48332c0d9c24f7b8d0c142c097930fa5ad5749378910\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.855453 containerd[1542]: time="2025-07-01T08:39:28.854880889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f67cd65d6-ngf6p,Uid:0786e9ab-ac8a-49c1-b383-a63a8b688648,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28da1377b0ad1f5ef66c48332c0d9c24f7b8d0c142c097930fa5ad5749378910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.855651 kubelet[2710]: E0701 08:39:28.855174 2710 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28da1377b0ad1f5ef66c48332c0d9c24f7b8d0c142c097930fa5ad5749378910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.855651 kubelet[2710]: E0701 08:39:28.855268 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28da1377b0ad1f5ef66c48332c0d9c24f7b8d0c142c097930fa5ad5749378910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:28.855651 kubelet[2710]: E0701 08:39:28.855295 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28da1377b0ad1f5ef66c48332c0d9c24f7b8d0c142c097930fa5ad5749378910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:28.855815 kubelet[2710]: E0701 08:39:28.855356 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f67cd65d6-ngf6p_calico-system(0786e9ab-ac8a-49c1-b383-a63a8b688648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f67cd65d6-ngf6p_calico-system(0786e9ab-ac8a-49c1-b383-a63a8b688648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28da1377b0ad1f5ef66c48332c0d9c24f7b8d0c142c097930fa5ad5749378910\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f67cd65d6-ngf6p" podUID="0786e9ab-ac8a-49c1-b383-a63a8b688648" Jul 1 08:39:28.869286 containerd[1542]: time="2025-07-01T08:39:28.869105576Z" level=error msg="Failed to destroy network for sandbox \"1c4facff71eaa4e20eda625dcd69dde43bf825cbd432a11e455baf9032f90f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.872166 containerd[1542]: time="2025-07-01T08:39:28.872101567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-nb9gx,Uid:fad7d2f6-f023-4dfe-8f40-2b786f80da76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4facff71eaa4e20eda625dcd69dde43bf825cbd432a11e455baf9032f90f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.872837 containerd[1542]: time="2025-07-01T08:39:28.872801891Z" level=error msg="Failed to destroy network for sandbox \"f215d935915124d2f8679ce4a765d75e8af1b43b0470c9c13be7a758cfc1a6df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.872926 kubelet[2710]: E0701 08:39:28.872841 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4facff71eaa4e20eda625dcd69dde43bf825cbd432a11e455baf9032f90f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.873055 kubelet[2710]: E0701 
08:39:28.873012 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4facff71eaa4e20eda625dcd69dde43bf825cbd432a11e455baf9032f90f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" Jul 1 08:39:28.873106 kubelet[2710]: E0701 08:39:28.873066 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4facff71eaa4e20eda625dcd69dde43bf825cbd432a11e455baf9032f90f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" Jul 1 08:39:28.873228 kubelet[2710]: E0701 08:39:28.873170 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-755864c8f7-nb9gx_calico-apiserver(fad7d2f6-f023-4dfe-8f40-2b786f80da76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-755864c8f7-nb9gx_calico-apiserver(fad7d2f6-f023-4dfe-8f40-2b786f80da76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c4facff71eaa4e20eda625dcd69dde43bf825cbd432a11e455baf9032f90f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" podUID="fad7d2f6-f023-4dfe-8f40-2b786f80da76" Jul 1 08:39:28.875391 containerd[1542]: time="2025-07-01T08:39:28.874823979Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-bk4zk,Uid:04495a2b-e4bd-4639-a451-560be8d17132,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f215d935915124d2f8679ce4a765d75e8af1b43b0470c9c13be7a758cfc1a6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.876332 kubelet[2710]: E0701 08:39:28.876175 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f215d935915124d2f8679ce4a765d75e8af1b43b0470c9c13be7a758cfc1a6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.876696 kubelet[2710]: E0701 08:39:28.876661 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f215d935915124d2f8679ce4a765d75e8af1b43b0470c9c13be7a758cfc1a6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bk4zk" Jul 1 08:39:28.876757 kubelet[2710]: E0701 08:39:28.876701 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f215d935915124d2f8679ce4a765d75e8af1b43b0470c9c13be7a758cfc1a6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bk4zk" Jul 1 08:39:28.876883 kubelet[2710]: E0701 08:39:28.876775 2710 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bk4zk_kube-system(04495a2b-e4bd-4639-a451-560be8d17132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bk4zk_kube-system(04495a2b-e4bd-4639-a451-560be8d17132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f215d935915124d2f8679ce4a765d75e8af1b43b0470c9c13be7a758cfc1a6df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bk4zk" podUID="04495a2b-e4bd-4639-a451-560be8d17132" Jul 1 08:39:28.878438 containerd[1542]: time="2025-07-01T08:39:28.877891228Z" level=error msg="Failed to destroy network for sandbox \"8f0b298f184572fe27f80acd2ee52c7d686ef5c193d369c69d283e47dfa55e72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.881910 containerd[1542]: time="2025-07-01T08:39:28.881742082Z" level=error msg="Failed to destroy network for sandbox \"dc00490765453921ddbb69df0c3d78e0753292f34355975c419ab57759a8fce1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.882403 containerd[1542]: time="2025-07-01T08:39:28.882364856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d868c7f67-4frp4,Uid:be22af01-1e33-4ff0-84fd-b35a2cbb1ca7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0b298f184572fe27f80acd2ee52c7d686ef5c193d369c69d283e47dfa55e72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.883005 kubelet[2710]: E0701 08:39:28.882823 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0b298f184572fe27f80acd2ee52c7d686ef5c193d369c69d283e47dfa55e72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.883279 kubelet[2710]: E0701 08:39:28.883129 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0b298f184572fe27f80acd2ee52c7d686ef5c193d369c69d283e47dfa55e72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d868c7f67-4frp4" Jul 1 08:39:28.883279 kubelet[2710]: E0701 08:39:28.883159 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0b298f184572fe27f80acd2ee52c7d686ef5c193d369c69d283e47dfa55e72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d868c7f67-4frp4" Jul 1 08:39:28.883536 kubelet[2710]: E0701 08:39:28.883323 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d868c7f67-4frp4_calico-system(be22af01-1e33-4ff0-84fd-b35a2cbb1ca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d868c7f67-4frp4_calico-system(be22af01-1e33-4ff0-84fd-b35a2cbb1ca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8f0b298f184572fe27f80acd2ee52c7d686ef5c193d369c69d283e47dfa55e72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d868c7f67-4frp4" podUID="be22af01-1e33-4ff0-84fd-b35a2cbb1ca7" Jul 1 08:39:28.886963 systemd[1]: Created slice kubepods-besteffort-pod41da0259_d39e_42de_9ee6_ab885e8b5785.slice - libcontainer container kubepods-besteffort-pod41da0259_d39e_42de_9ee6_ab885e8b5785.slice. Jul 1 08:39:28.889953 containerd[1542]: time="2025-07-01T08:39:28.889857469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-l96sx,Uid:768e85aa-cad8-45c2-9aa4-684f6da1de45,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00490765453921ddbb69df0c3d78e0753292f34355975c419ab57759a8fce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.890321 kubelet[2710]: E0701 08:39:28.890276 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00490765453921ddbb69df0c3d78e0753292f34355975c419ab57759a8fce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.890390 kubelet[2710]: E0701 08:39:28.890335 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00490765453921ddbb69df0c3d78e0753292f34355975c419ab57759a8fce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755864c8f7-l96sx" Jul 1 08:39:28.890390 kubelet[2710]: E0701 08:39:28.890355 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00490765453921ddbb69df0c3d78e0753292f34355975c419ab57759a8fce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755864c8f7-l96sx" Jul 1 08:39:28.890657 kubelet[2710]: E0701 08:39:28.890392 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-755864c8f7-l96sx_calico-apiserver(768e85aa-cad8-45c2-9aa4-684f6da1de45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-755864c8f7-l96sx_calico-apiserver(768e85aa-cad8-45c2-9aa4-684f6da1de45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc00490765453921ddbb69df0c3d78e0753292f34355975c419ab57759a8fce1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-755864c8f7-l96sx" podUID="768e85aa-cad8-45c2-9aa4-684f6da1de45" Jul 1 08:39:28.890740 containerd[1542]: time="2025-07-01T08:39:28.890402142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4kczh,Uid:41da0259-d39e-42de-9ee6-ab885e8b5785,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:28.893247 containerd[1542]: time="2025-07-01T08:39:28.893194128Z" level=error msg="Failed to destroy network for sandbox \"5ac5a7e39221291555206a2adef09c4ed2fb7fd0cbf2ac52e1c81fb7b0893974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.897357 containerd[1542]: time="2025-07-01T08:39:28.896915712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hrqb5,Uid:8dd24de6-7001-4bd3-8b8c-a553b183fab5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac5a7e39221291555206a2adef09c4ed2fb7fd0cbf2ac52e1c81fb7b0893974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.897710 kubelet[2710]: E0701 08:39:28.897197 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac5a7e39221291555206a2adef09c4ed2fb7fd0cbf2ac52e1c81fb7b0893974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.897710 kubelet[2710]: E0701 08:39:28.897287 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac5a7e39221291555206a2adef09c4ed2fb7fd0cbf2ac52e1c81fb7b0893974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:28.897710 kubelet[2710]: E0701 08:39:28.897317 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac5a7e39221291555206a2adef09c4ed2fb7fd0cbf2ac52e1c81fb7b0893974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:28.897852 kubelet[2710]: E0701 08:39:28.897377 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-hrqb5_calico-system(8dd24de6-7001-4bd3-8b8c-a553b183fab5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-hrqb5_calico-system(8dd24de6-7001-4bd3-8b8c-a553b183fab5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ac5a7e39221291555206a2adef09c4ed2fb7fd0cbf2ac52e1c81fb7b0893974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-hrqb5" podUID="8dd24de6-7001-4bd3-8b8c-a553b183fab5" Jul 1 08:39:28.954274 containerd[1542]: time="2025-07-01T08:39:28.954189773Z" level=error msg="Failed to destroy network for sandbox \"1d82abd26f457f21b960d7813b632084e1f175fa27686475c9a0e9ac78f9c591\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.955728 containerd[1542]: time="2025-07-01T08:39:28.955683791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4kczh,Uid:41da0259-d39e-42de-9ee6-ab885e8b5785,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d82abd26f457f21b960d7813b632084e1f175fa27686475c9a0e9ac78f9c591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.956058 kubelet[2710]: E0701 08:39:28.955985 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"1d82abd26f457f21b960d7813b632084e1f175fa27686475c9a0e9ac78f9c591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:28.956253 kubelet[2710]: E0701 08:39:28.956078 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d82abd26f457f21b960d7813b632084e1f175fa27686475c9a0e9ac78f9c591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4kczh" Jul 1 08:39:28.956253 kubelet[2710]: E0701 08:39:28.956114 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d82abd26f457f21b960d7813b632084e1f175fa27686475c9a0e9ac78f9c591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4kczh" Jul 1 08:39:28.956253 kubelet[2710]: E0701 08:39:28.956202 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4kczh_calico-system(41da0259-d39e-42de-9ee6-ab885e8b5785)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4kczh_calico-system(41da0259-d39e-42de-9ee6-ab885e8b5785)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d82abd26f457f21b960d7813b632084e1f175fa27686475c9a0e9ac78f9c591\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-4kczh" podUID="41da0259-d39e-42de-9ee6-ab885e8b5785" Jul 1 08:39:29.511880 systemd[1]: run-netns-cni\x2d3645aeda\x2dc7ad\x2d59be\x2d6fb2\x2d80eb6da1a3fd.mount: Deactivated successfully. Jul 1 08:39:29.512002 systemd[1]: run-netns-cni\x2d593913d8\x2decb4\x2d29cf\x2ddfa4\x2d20debc952e64.mount: Deactivated successfully. Jul 1 08:39:29.512070 systemd[1]: run-netns-cni\x2d602ae06b\x2de8cd\x2d7bac\x2d1f4b\x2d7b75a58bb091.mount: Deactivated successfully. Jul 1 08:39:29.512135 systemd[1]: run-netns-cni\x2d61622936\x2d23c6\x2d4300\x2d8250\x2d691bc151f17c.mount: Deactivated successfully. Jul 1 08:39:36.958592 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:46714.service - OpenSSH per-connection server daemon (10.0.0.1:46714). Jul 1 08:39:37.084003 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 46714 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo Jul 1 08:39:37.085729 sshd-session[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:37.101285 systemd-logind[1525]: New session 8 of user core. Jul 1 08:39:37.112684 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 1 08:39:37.334962 sshd[3766]: Connection closed by 10.0.0.1 port 46714 Jul 1 08:39:37.335277 sshd-session[3763]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:37.340077 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:46714.service: Deactivated successfully. Jul 1 08:39:37.340600 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Jul 1 08:39:37.343375 systemd[1]: session-8.scope: Deactivated successfully. Jul 1 08:39:37.346748 systemd-logind[1525]: Removed session 8. Jul 1 08:39:38.277799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570917899.mount: Deactivated successfully. 
Jul 1 08:39:39.875535 containerd[1542]: time="2025-07-01T08:39:39.875442879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-nb9gx,Uid:fad7d2f6-f023-4dfe-8f40-2b786f80da76,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:39:40.473073 containerd[1542]: time="2025-07-01T08:39:40.472983804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:40.493848 containerd[1542]: time="2025-07-01T08:39:40.493774122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 1 08:39:40.506614 containerd[1542]: time="2025-07-01T08:39:40.506536696Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:40.522502 containerd[1542]: time="2025-07-01T08:39:40.522386400Z" level=error msg="Failed to destroy network for sandbox \"dd4d54a9813e3306fa4bb19ad0894d7a67fdb019f9f002b07df1cc5c12ef0c72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:40.525060 systemd[1]: run-netns-cni\x2df48dfd52\x2d9470\x2d171a\x2dec84\x2da210a7ceed01.mount: Deactivated successfully. 
Jul 1 08:39:40.535674 containerd[1542]: time="2025-07-01T08:39:40.535583898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:40.538892 containerd[1542]: time="2025-07-01T08:39:40.536686142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.444182767s" Jul 1 08:39:40.538892 containerd[1542]: time="2025-07-01T08:39:40.536731659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 1 08:39:40.554523 containerd[1542]: time="2025-07-01T08:39:40.554338943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-nb9gx,Uid:fad7d2f6-f023-4dfe-8f40-2b786f80da76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d54a9813e3306fa4bb19ad0894d7a67fdb019f9f002b07df1cc5c12ef0c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:40.557200 kubelet[2710]: E0701 08:39:40.556811 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d54a9813e3306fa4bb19ad0894d7a67fdb019f9f002b07df1cc5c12ef0c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 1 08:39:40.557200 kubelet[2710]: E0701 08:39:40.556904 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d54a9813e3306fa4bb19ad0894d7a67fdb019f9f002b07df1cc5c12ef0c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" Jul 1 08:39:40.557200 kubelet[2710]: E0701 08:39:40.556928 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4d54a9813e3306fa4bb19ad0894d7a67fdb019f9f002b07df1cc5c12ef0c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" Jul 1 08:39:40.565056 kubelet[2710]: E0701 08:39:40.556998 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-755864c8f7-nb9gx_calico-apiserver(fad7d2f6-f023-4dfe-8f40-2b786f80da76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-755864c8f7-nb9gx_calico-apiserver(fad7d2f6-f023-4dfe-8f40-2b786f80da76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd4d54a9813e3306fa4bb19ad0894d7a67fdb019f9f002b07df1cc5c12ef0c72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" podUID="fad7d2f6-f023-4dfe-8f40-2b786f80da76" Jul 1 08:39:40.572927 containerd[1542]: time="2025-07-01T08:39:40.572865139Z" level=info msg="CreateContainer within sandbox 
\"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 1 08:39:40.712808 containerd[1542]: time="2025-07-01T08:39:40.712740600Z" level=info msg="Container 2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:40.875586 containerd[1542]: time="2025-07-01T08:39:40.875495716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f67cd65d6-ngf6p,Uid:0786e9ab-ac8a-49c1-b383-a63a8b688648,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:40.876028 containerd[1542]: time="2025-07-01T08:39:40.875693545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hrqb5,Uid:8dd24de6-7001-4bd3-8b8c-a553b183fab5,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:41.400324 containerd[1542]: time="2025-07-01T08:39:41.400230759Z" level=info msg="CreateContainer within sandbox \"2ce32b4a8dcafe8ab8926cd06ac681424dad359fcd463df9783db04a1f139ca3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\"" Jul 1 08:39:41.402061 containerd[1542]: time="2025-07-01T08:39:41.401171693Z" level=info msg="StartContainer for \"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\"" Jul 1 08:39:41.431723 containerd[1542]: time="2025-07-01T08:39:41.431658654Z" level=info msg="connecting to shim 2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443" address="unix:///run/containerd/s/337f59d8e89a59ee8ad40292bb406a1895f02e75612d80dd0ed350fc6b87bc4a" protocol=ttrpc version=3 Jul 1 08:39:41.465073 systemd[1]: Started cri-containerd-2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443.scope - libcontainer container 2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443. 
Jul 1 08:39:41.470856 containerd[1542]: time="2025-07-01T08:39:41.470789833Z" level=error msg="Failed to destroy network for sandbox \"87efd01589590ff5b2cd7be3b9893498b66e6635c4ceeab956ba3b818daf6d97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:41.473663 containerd[1542]: time="2025-07-01T08:39:41.473398763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f67cd65d6-ngf6p,Uid:0786e9ab-ac8a-49c1-b383-a63a8b688648,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87efd01589590ff5b2cd7be3b9893498b66e6635c4ceeab956ba3b818daf6d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:41.474353 kubelet[2710]: E0701 08:39:41.474257 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87efd01589590ff5b2cd7be3b9893498b66e6635c4ceeab956ba3b818daf6d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:41.474353 kubelet[2710]: E0701 08:39:41.474345 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87efd01589590ff5b2cd7be3b9893498b66e6635c4ceeab956ba3b818daf6d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:41.474573 kubelet[2710]: E0701 08:39:41.474374 2710 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87efd01589590ff5b2cd7be3b9893498b66e6635c4ceeab956ba3b818daf6d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f67cd65d6-ngf6p" Jul 1 08:39:41.474469 systemd[1]: run-netns-cni\x2d6aab7981\x2d503a\x2d9a80\x2d7c50\x2dd0aa422f9091.mount: Deactivated successfully. Jul 1 08:39:41.475986 kubelet[2710]: E0701 08:39:41.475247 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f67cd65d6-ngf6p_calico-system(0786e9ab-ac8a-49c1-b383-a63a8b688648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f67cd65d6-ngf6p_calico-system(0786e9ab-ac8a-49c1-b383-a63a8b688648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87efd01589590ff5b2cd7be3b9893498b66e6635c4ceeab956ba3b818daf6d97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f67cd65d6-ngf6p" podUID="0786e9ab-ac8a-49c1-b383-a63a8b688648" Jul 1 08:39:41.480643 containerd[1542]: time="2025-07-01T08:39:41.480316301Z" level=error msg="Failed to destroy network for sandbox \"eade982cc3b88b1e749d8dc715ce4db86a5a09e2f4d06fd0409481c9ff6ca79b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:41.484639 systemd[1]: run-netns-cni\x2d1d1425ad\x2dcee8\x2d6fa4\x2d3d2c\x2dd3292fae1dab.mount: Deactivated successfully. 
Jul 1 08:39:41.485506 containerd[1542]: time="2025-07-01T08:39:41.484601755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hrqb5,Uid:8dd24de6-7001-4bd3-8b8c-a553b183fab5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eade982cc3b88b1e749d8dc715ce4db86a5a09e2f4d06fd0409481c9ff6ca79b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:41.485630 kubelet[2710]: E0701 08:39:41.485103 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eade982cc3b88b1e749d8dc715ce4db86a5a09e2f4d06fd0409481c9ff6ca79b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:39:41.485630 kubelet[2710]: E0701 08:39:41.485191 2710 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eade982cc3b88b1e749d8dc715ce4db86a5a09e2f4d06fd0409481c9ff6ca79b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:41.485630 kubelet[2710]: E0701 08:39:41.485222 2710 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eade982cc3b88b1e749d8dc715ce4db86a5a09e2f4d06fd0409481c9ff6ca79b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-hrqb5" Jul 1 08:39:41.485772 kubelet[2710]: E0701 08:39:41.485302 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-hrqb5_calico-system(8dd24de6-7001-4bd3-8b8c-a553b183fab5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-hrqb5_calico-system(8dd24de6-7001-4bd3-8b8c-a553b183fab5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eade982cc3b88b1e749d8dc715ce4db86a5a09e2f4d06fd0409481c9ff6ca79b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-hrqb5" podUID="8dd24de6-7001-4bd3-8b8c-a553b183fab5" Jul 1 08:39:41.543843 containerd[1542]: time="2025-07-01T08:39:41.543787581Z" level=info msg="StartContainer for \"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\" returns successfully" Jul 1 08:39:41.646741 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 1 08:39:41.647619 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 1 08:39:41.794899 kubelet[2710]: I0701 08:39:41.794715 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-58v6s" podStartSLOduration=3.084007952 podStartE2EDuration="43.794698437s" podCreationTimestamp="2025-07-01 08:38:58 +0000 UTC" firstStartedPulling="2025-07-01 08:38:59.826989231 +0000 UTC m=+31.103058536" lastFinishedPulling="2025-07-01 08:39:40.537679716 +0000 UTC m=+71.813749021" observedRunningTime="2025-07-01 08:39:41.794621359 +0000 UTC m=+73.070690665" watchObservedRunningTime="2025-07-01 08:39:41.794698437 +0000 UTC m=+73.070767732" Jul 1 08:39:41.951728 kubelet[2710]: I0701 08:39:41.951615 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-backend-key-pair\") pod \"0786e9ab-ac8a-49c1-b383-a63a8b688648\" (UID: \"0786e9ab-ac8a-49c1-b383-a63a8b688648\") " Jul 1 08:39:41.951728 kubelet[2710]: I0701 08:39:41.951692 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kld4g\" (UniqueName: \"kubernetes.io/projected/0786e9ab-ac8a-49c1-b383-a63a8b688648-kube-api-access-kld4g\") pod \"0786e9ab-ac8a-49c1-b383-a63a8b688648\" (UID: \"0786e9ab-ac8a-49c1-b383-a63a8b688648\") " Jul 1 08:39:41.951728 kubelet[2710]: I0701 08:39:41.951730 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-ca-bundle\") pod \"0786e9ab-ac8a-49c1-b383-a63a8b688648\" (UID: \"0786e9ab-ac8a-49c1-b383-a63a8b688648\") " Jul 1 08:39:41.952439 kubelet[2710]: I0701 08:39:41.952309 2710 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0786e9ab-ac8a-49c1-b383-a63a8b688648" 
(UID: "0786e9ab-ac8a-49c1-b383-a63a8b688648"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 1 08:39:41.962641 systemd[1]: var-lib-kubelet-pods-0786e9ab\x2dac8a\x2d49c1\x2db383\x2da63a8b688648-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkld4g.mount: Deactivated successfully. Jul 1 08:39:41.963184 systemd[1]: var-lib-kubelet-pods-0786e9ab\x2dac8a\x2d49c1\x2db383\x2da63a8b688648-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 1 08:39:41.966104 kubelet[2710]: I0701 08:39:41.964913 2710 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0786e9ab-ac8a-49c1-b383-a63a8b688648" (UID: "0786e9ab-ac8a-49c1-b383-a63a8b688648"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 1 08:39:41.966104 kubelet[2710]: I0701 08:39:41.966022 2710 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0786e9ab-ac8a-49c1-b383-a63a8b688648-kube-api-access-kld4g" (OuterVolumeSpecName: "kube-api-access-kld4g") pod "0786e9ab-ac8a-49c1-b383-a63a8b688648" (UID: "0786e9ab-ac8a-49c1-b383-a63a8b688648"). InnerVolumeSpecName "kube-api-access-kld4g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 1 08:39:41.987638 containerd[1542]: time="2025-07-01T08:39:41.987591036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\" id:\"e99957e0ff703abccad04b1662d9b4a1b126fb8859068381f511cceaf16344de\" pid:3939 exit_status:1 exited_at:{seconds:1751359181 nanos:976599509}" Jul 1 08:39:42.052315 kubelet[2710]: I0701 08:39:42.052241 2710 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:42.052315 kubelet[2710]: I0701 08:39:42.052285 2710 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0786e9ab-ac8a-49c1-b383-a63a8b688648-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:42.052315 kubelet[2710]: I0701 08:39:42.052298 2710 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kld4g\" (UniqueName: \"kubernetes.io/projected/0786e9ab-ac8a-49c1-b383-a63a8b688648-kube-api-access-kld4g\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:42.352288 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:52484.service - OpenSSH per-connection server daemon (10.0.0.1:52484). Jul 1 08:39:42.429435 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 52484 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo Jul 1 08:39:42.431443 sshd-session[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:42.437232 systemd-logind[1525]: New session 9 of user core. Jul 1 08:39:42.455797 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 1 08:39:42.631893 sshd[3966]: Connection closed by 10.0.0.1 port 52484 Jul 1 08:39:42.632817 sshd-session[3963]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:42.638649 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:52484.service: Deactivated successfully. Jul 1 08:39:42.641458 systemd[1]: session-9.scope: Deactivated successfully. Jul 1 08:39:42.642745 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Jul 1 08:39:42.644582 systemd-logind[1525]: Removed session 9. Jul 1 08:39:42.765869 systemd[1]: Removed slice kubepods-besteffort-pod0786e9ab_ac8a_49c1_b383_a63a8b688648.slice - libcontainer container kubepods-besteffort-pod0786e9ab_ac8a_49c1_b383_a63a8b688648.slice. Jul 1 08:39:42.845396 containerd[1542]: time="2025-07-01T08:39:42.845269569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\" id:\"b72127f15f5ae3f7ca3cbc383bcdd9348a4c055dcb099a8806d77c3beaae18d2\" pid:3991 exit_status:1 exited_at:{seconds:1751359182 nanos:844862360}" Jul 1 08:39:42.878566 containerd[1542]: time="2025-07-01T08:39:42.877393967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d868c7f67-4frp4,Uid:be22af01-1e33-4ff0-84fd-b35a2cbb1ca7,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:42.879402 containerd[1542]: time="2025-07-01T08:39:42.879323594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-l96sx,Uid:768e85aa-cad8-45c2-9aa4-684f6da1de45,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:39:42.879529 containerd[1542]: time="2025-07-01T08:39:42.878458116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4kczh,Uid:41da0259-d39e-42de-9ee6-ab885e8b5785,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:42.972462 systemd[1]: Created slice kubepods-besteffort-pod6a33d63a_3622_4ba2_92e2_3d0c014d3abe.slice - libcontainer container 
kubepods-besteffort-pod6a33d63a_3622_4ba2_92e2_3d0c014d3abe.slice. Jul 1 08:39:43.062105 kubelet[2710]: I0701 08:39:43.062006 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a33d63a-3622-4ba2-92e2-3d0c014d3abe-whisker-backend-key-pair\") pod \"whisker-7464967f58-kz8mf\" (UID: \"6a33d63a-3622-4ba2-92e2-3d0c014d3abe\") " pod="calico-system/whisker-7464967f58-kz8mf" Jul 1 08:39:43.062105 kubelet[2710]: I0701 08:39:43.062070 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m277w\" (UniqueName: \"kubernetes.io/projected/6a33d63a-3622-4ba2-92e2-3d0c014d3abe-kube-api-access-m277w\") pod \"whisker-7464967f58-kz8mf\" (UID: \"6a33d63a-3622-4ba2-92e2-3d0c014d3abe\") " pod="calico-system/whisker-7464967f58-kz8mf" Jul 1 08:39:43.062105 kubelet[2710]: I0701 08:39:43.062089 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a33d63a-3622-4ba2-92e2-3d0c014d3abe-whisker-ca-bundle\") pod \"whisker-7464967f58-kz8mf\" (UID: \"6a33d63a-3622-4ba2-92e2-3d0c014d3abe\") " pod="calico-system/whisker-7464967f58-kz8mf" Jul 1 08:39:43.576647 containerd[1542]: time="2025-07-01T08:39:43.576595140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7464967f58-kz8mf,Uid:6a33d63a-3622-4ba2-92e2-3d0c014d3abe,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:43.875588 containerd[1542]: time="2025-07-01T08:39:43.875430248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bk4zk,Uid:04495a2b-e4bd-4639-a451-560be8d17132,Namespace:kube-system,Attempt:0,}" Jul 1 08:39:43.875936 containerd[1542]: time="2025-07-01T08:39:43.875651021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-m5pqf,Uid:7c1e9f24-4286-4947-b2c6-20a4f6d90605,Namespace:kube-system,Attempt:0,}" Jul 1 08:39:44.377262 systemd-networkd[1475]: calibac0c00c658: Link UP Jul 1 08:39:44.377642 systemd-networkd[1475]: calibac0c00c658: Gained carrier Jul 1 08:39:44.409342 containerd[1542]: 2025-07-01 08:39:42.944 [INFO][4037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:39:44.409342 containerd[1542]: 2025-07-01 08:39:43.344 [INFO][4037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4kczh-eth0 csi-node-driver- calico-system 41da0259-d39e-42de-9ee6-ab885e8b5785 750 0 2025-07-01 08:38:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4kczh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibac0c00c658 [] [] }} ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-" Jul 1 08:39:44.409342 containerd[1542]: 2025-07-01 08:39:43.345 [INFO][4037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.409342 containerd[1542]: 2025-07-01 08:39:44.206 [INFO][4065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" 
HandleID="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Workload="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.222 [INFO][4065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" HandleID="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Workload="localhost-k8s-csi--node--driver--4kczh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001897c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4kczh", "timestamp":"2025-07-01 08:39:44.206580676 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.222 [INFO][4065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.223 [INFO][4065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.223 [INFO][4065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.283 [INFO][4065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" host="localhost" Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.301 [INFO][4065] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.319 [INFO][4065] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.326 [INFO][4065] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.330 [INFO][4065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:44.411109 containerd[1542]: 2025-07-01 08:39:44.330 [INFO][4065] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" host="localhost" Jul 1 08:39:44.411470 containerd[1542]: 2025-07-01 08:39:44.334 [INFO][4065] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411 Jul 1 08:39:44.411470 containerd[1542]: 2025-07-01 08:39:44.341 [INFO][4065] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" host="localhost" Jul 1 08:39:44.411470 containerd[1542]: 2025-07-01 08:39:44.352 [INFO][4065] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" host="localhost" Jul 1 08:39:44.411470 containerd[1542]: 2025-07-01 08:39:44.353 [INFO][4065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" host="localhost" Jul 1 08:39:44.411470 containerd[1542]: 2025-07-01 08:39:44.353 [INFO][4065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:44.411470 containerd[1542]: 2025-07-01 08:39:44.353 [INFO][4065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" HandleID="k8s-pod-network.c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Workload="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.411675 containerd[1542]: 2025-07-01 08:39:44.359 [INFO][4037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4kczh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41da0259-d39e-42de-9ee6-ab885e8b5785", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4kczh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibac0c00c658", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:44.411759 containerd[1542]: 2025-07-01 08:39:44.360 [INFO][4037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.411759 containerd[1542]: 2025-07-01 08:39:44.361 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibac0c00c658 ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.411759 containerd[1542]: 2025-07-01 08:39:44.379 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.411973 containerd[1542]: 2025-07-01 08:39:44.380 [INFO][4037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" 
Namespace="calico-system" Pod="csi-node-driver-4kczh" WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4kczh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41da0259-d39e-42de-9ee6-ab885e8b5785", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411", Pod:"csi-node-driver-4kczh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibac0c00c658", MAC:"d6:d3:e2:22:c8:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:44.412049 containerd[1542]: 2025-07-01 08:39:44.404 [INFO][4037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" Namespace="calico-system" Pod="csi-node-driver-4kczh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4kczh-eth0" Jul 1 08:39:44.552499 systemd-networkd[1475]: cali0c127bed15b: Link UP Jul 1 08:39:44.554531 systemd-networkd[1475]: cali0c127bed15b: Gained carrier Jul 1 08:39:44.601363 containerd[1542]: 2025-07-01 08:39:42.946 [INFO][4022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:39:44.601363 containerd[1542]: 2025-07-01 08:39:43.332 [INFO][4022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0 calico-apiserver-755864c8f7- calico-apiserver 768e85aa-cad8-45c2-9aa4-684f6da1de45 908 0 2025-07-01 08:38:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:755864c8f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-755864c8f7-l96sx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0c127bed15b [] [] }} ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-" Jul 1 08:39:44.601363 containerd[1542]: 2025-07-01 08:39:43.334 [INFO][4022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.601363 containerd[1542]: 2025-07-01 08:39:44.206 [INFO][4061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" 
HandleID="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Workload="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.222 [INFO][4061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" HandleID="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Workload="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001389a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-755864c8f7-l96sx", "timestamp":"2025-07-01 08:39:44.206565486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.223 [INFO][4061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.353 [INFO][4061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.353 [INFO][4061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.385 [INFO][4061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" host="localhost" Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.419 [INFO][4061] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.438 [INFO][4061] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.445 [INFO][4061] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.451 [INFO][4061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:44.605532 containerd[1542]: 2025-07-01 08:39:44.452 [INFO][4061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" host="localhost" Jul 1 08:39:44.605921 containerd[1542]: 2025-07-01 08:39:44.457 [INFO][4061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded Jul 1 08:39:44.605921 containerd[1542]: 2025-07-01 08:39:44.468 [INFO][4061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" host="localhost" Jul 1 08:39:44.605921 containerd[1542]: 2025-07-01 08:39:44.509 [INFO][4061] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" host="localhost" Jul 1 08:39:44.605921 containerd[1542]: 2025-07-01 08:39:44.510 [INFO][4061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" host="localhost" Jul 1 08:39:44.605921 containerd[1542]: 2025-07-01 08:39:44.511 [INFO][4061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:44.605921 containerd[1542]: 2025-07-01 08:39:44.511 [INFO][4061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" HandleID="k8s-pod-network.d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Workload="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.606104 containerd[1542]: 2025-07-01 08:39:44.532 [INFO][4022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0", GenerateName:"calico-apiserver-755864c8f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"768e85aa-cad8-45c2-9aa4-684f6da1de45", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755864c8f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-755864c8f7-l96sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c127bed15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:44.606185 containerd[1542]: 2025-07-01 08:39:44.535 [INFO][4022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.606185 containerd[1542]: 2025-07-01 08:39:44.535 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c127bed15b ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.606185 containerd[1542]: 2025-07-01 08:39:44.558 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.606366 containerd[1542]: 2025-07-01 08:39:44.564 [INFO][4022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0", GenerateName:"calico-apiserver-755864c8f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"768e85aa-cad8-45c2-9aa4-684f6da1de45", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755864c8f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded", Pod:"calico-apiserver-755864c8f7-l96sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c127bed15b", MAC:"46:92:f0:7f:95:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:44.612033 containerd[1542]: 2025-07-01 08:39:44.590 [INFO][4022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-l96sx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--l96sx-eth0" Jul 1 08:39:44.623438 containerd[1542]: time="2025-07-01T08:39:44.623263697Z" level=info msg="connecting to shim c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411" address="unix:///run/containerd/s/67afe8488c12810983f363bfd78fd006d216720582dc9b1c060a660611c74855" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:44.676841 systemd[1]: Started cri-containerd-c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411.scope - libcontainer container c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411. Jul 1 08:39:44.682328 systemd-networkd[1475]: calied813a1fdc4: Link UP Jul 1 08:39:44.684050 systemd-networkd[1475]: calied813a1fdc4: Gained carrier Jul 1 08:39:44.709881 containerd[1542]: 2025-07-01 08:39:42.946 [INFO][4011] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:39:44.709881 containerd[1542]: 2025-07-01 08:39:43.332 [INFO][4011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0 calico-kube-controllers-6d868c7f67- calico-system be22af01-1e33-4ff0-84fd-b35a2cbb1ca7 906 0 2025-07-01 08:38:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d868c7f67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d868c7f67-4frp4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied813a1fdc4 [] [] }} ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" 
Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-" Jul 1 08:39:44.709881 containerd[1542]: 2025-07-01 08:39:43.334 [INFO][4011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 08:39:44.709881 containerd[1542]: 2025-07-01 08:39:44.206 [INFO][4062] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" HandleID="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Workload="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.222 [INFO][4062] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" HandleID="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Workload="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d868c7f67-4frp4", "timestamp":"2025-07-01 08:39:44.206917009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.223 [INFO][4062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.511 [INFO][4062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.512 [INFO][4062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.537 [INFO][4062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" host="localhost" Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.560 [INFO][4062] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.593 [INFO][4062] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.608 [INFO][4062] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.617 [INFO][4062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:44.710295 containerd[1542]: 2025-07-01 08:39:44.617 [INFO][4062] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" host="localhost" Jul 1 08:39:44.710849 containerd[1542]: 2025-07-01 08:39:44.621 [INFO][4062] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7 Jul 1 08:39:44.710849 containerd[1542]: 2025-07-01 08:39:44.632 [INFO][4062] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" host="localhost" Jul 1 08:39:44.710849 containerd[1542]: 2025-07-01 08:39:44.657 [INFO][4062] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" host="localhost" Jul 1 08:39:44.710849 containerd[1542]: 2025-07-01 08:39:44.657 [INFO][4062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" host="localhost" Jul 1 08:39:44.710849 containerd[1542]: 2025-07-01 08:39:44.657 [INFO][4062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:44.710849 containerd[1542]: 2025-07-01 08:39:44.657 [INFO][4062] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" HandleID="k8s-pod-network.bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Workload="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 08:39:44.710973 containerd[1542]: 2025-07-01 08:39:44.674 [INFO][4011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0", GenerateName:"calico-kube-controllers-6d868c7f67-", Namespace:"calico-system", SelfLink:"", UID:"be22af01-1e33-4ff0-84fd-b35a2cbb1ca7", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d868c7f67", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d868c7f67-4frp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied813a1fdc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:44.711031 containerd[1542]: 2025-07-01 08:39:44.674 [INFO][4011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 08:39:44.711031 containerd[1542]: 2025-07-01 08:39:44.675 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied813a1fdc4 ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 08:39:44.711031 containerd[1542]: 2025-07-01 08:39:44.685 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 
08:39:44.711094 containerd[1542]: 2025-07-01 08:39:44.688 [INFO][4011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0", GenerateName:"calico-kube-controllers-6d868c7f67-", Namespace:"calico-system", SelfLink:"", UID:"be22af01-1e33-4ff0-84fd-b35a2cbb1ca7", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d868c7f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7", Pod:"calico-kube-controllers-6d868c7f67-4frp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied813a1fdc4", MAC:"52:81:ef:67:96:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 
08:39:44.711149 containerd[1542]: 2025-07-01 08:39:44.703 [INFO][4011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" Namespace="calico-system" Pod="calico-kube-controllers-6d868c7f67-4frp4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d868c7f67--4frp4-eth0" Jul 1 08:39:44.719618 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:44.739269 systemd-networkd[1475]: vxlan.calico: Link UP Jul 1 08:39:44.739283 systemd-networkd[1475]: vxlan.calico: Gained carrier Jul 1 08:39:44.781447 containerd[1542]: time="2025-07-01T08:39:44.781258751Z" level=info msg="connecting to shim d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded" address="unix:///run/containerd/s/5722f2e6c1908e95b26f5552eb4495e25279c0bf6575d52eb71ce0f2eaf2cf8b" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:44.847677 systemd[1]: Started cri-containerd-d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded.scope - libcontainer container d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded. 
Jul 1 08:39:44.875879 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:44.881355 kubelet[2710]: I0701 08:39:44.881306 2710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0786e9ab-ac8a-49c1-b383-a63a8b688648" path="/var/lib/kubelet/pods/0786e9ab-ac8a-49c1-b383-a63a8b688648/volumes" Jul 1 08:39:45.370804 containerd[1542]: time="2025-07-01T08:39:45.370695424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4kczh,Uid:41da0259-d39e-42de-9ee6-ab885e8b5785,Namespace:calico-system,Attempt:0,} returns sandbox id \"c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411\"" Jul 1 08:39:45.379486 containerd[1542]: time="2025-07-01T08:39:45.379385416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 1 08:39:45.390179 containerd[1542]: time="2025-07-01T08:39:45.390101583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-l96sx,Uid:768e85aa-cad8-45c2-9aa4-684f6da1de45,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded\"" Jul 1 08:39:45.421120 systemd-networkd[1475]: cali46a536f9123: Link UP Jul 1 08:39:45.424034 systemd-networkd[1475]: cali46a536f9123: Gained carrier Jul 1 08:39:45.451216 containerd[1542]: time="2025-07-01T08:39:45.451109968Z" level=info msg="connecting to shim bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7" address="unix:///run/containerd/s/6d8a453cdf111b2e8439738c942db1fe4662fb7fb906a009a5262d76ab3b9f6a" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:45.452027 containerd[1542]: 2025-07-01 08:39:44.847 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0 coredns-674b8bbfcf- kube-system 7c1e9f24-4286-4947-b2c6-20a4f6d90605 904 0 2025-07-01 08:38:34 +0000 
UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-m5pqf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46a536f9123 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-" Jul 1 08:39:45.452027 containerd[1542]: 2025-07-01 08:39:44.848 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.452027 containerd[1542]: 2025-07-01 08:39:44.949 [INFO][4412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" HandleID="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Workload="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.949 [INFO][4412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" HandleID="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Workload="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-m5pqf", "timestamp":"2025-07-01 08:39:44.949507737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.949 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.950 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.950 [INFO][4412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.959 [INFO][4412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" host="localhost" Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.969 [INFO][4412] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.976 [INFO][4412] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.979 [INFO][4412] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.982 [INFO][4412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:45.452191 containerd[1542]: 2025-07-01 08:39:44.982 [INFO][4412] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" host="localhost" Jul 1 08:39:45.452437 containerd[1542]: 2025-07-01 08:39:44.984 [INFO][4412] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a Jul 1 08:39:45.452437 containerd[1542]: 2025-07-01 08:39:45.334 [INFO][4412] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" host="localhost" Jul 1 08:39:45.452437 containerd[1542]: 2025-07-01 08:39:45.400 [INFO][4412] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" host="localhost" Jul 1 08:39:45.452437 containerd[1542]: 2025-07-01 08:39:45.400 [INFO][4412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" host="localhost" Jul 1 08:39:45.452437 containerd[1542]: 2025-07-01 08:39:45.401 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:45.452437 containerd[1542]: 2025-07-01 08:39:45.402 [INFO][4412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" HandleID="k8s-pod-network.ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Workload="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.452626 containerd[1542]: 2025-07-01 08:39:45.408 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c1e9f24-4286-4947-b2c6-20a4f6d90605", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 34, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-m5pqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46a536f9123", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:45.452721 containerd[1542]: 2025-07-01 08:39:45.409 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.452721 containerd[1542]: 2025-07-01 08:39:45.409 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46a536f9123 ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.452721 containerd[1542]: 2025-07-01 08:39:45.426 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.452799 containerd[1542]: 2025-07-01 08:39:45.429 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c1e9f24-4286-4947-b2c6-20a4f6d90605", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a", Pod:"coredns-674b8bbfcf-m5pqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali46a536f9123", MAC:"fa:c7:7a:78:17:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:45.452799 containerd[1542]: 2025-07-01 08:39:45.446 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5pqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5pqf-eth0" Jul 1 08:39:45.500735 systemd[1]: Started cri-containerd-bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7.scope - libcontainer container bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7. 
Jul 1 08:39:45.514088 containerd[1542]: time="2025-07-01T08:39:45.513462024Z" level=info msg="connecting to shim ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a" address="unix:///run/containerd/s/da7f04b6cb1f02316fdf9519cb47efd8ad42f7769b40322c70518a3dd7e15899" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:45.534734 systemd-networkd[1475]: caliba8173fd249: Link UP Jul 1 08:39:45.534968 systemd-networkd[1475]: caliba8173fd249: Gained carrier Jul 1 08:39:45.550500 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:44.868 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7464967f58--kz8mf-eth0 whisker-7464967f58- calico-system 6a33d63a-3622-4ba2-92e2-3d0c014d3abe 1031 0 2025-07-01 08:39:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7464967f58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7464967f58-kz8mf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliba8173fd249 [] [] }} ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:44.870 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:44.960 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" HandleID="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Workload="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:44.961 [INFO][4414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" HandleID="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Workload="localhost-k8s-whisker--7464967f58--kz8mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7464967f58-kz8mf", "timestamp":"2025-07-01 08:39:44.960780439 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:44.961 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.401 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.402 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.422 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.448 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.462 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.472 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.478 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.479 [INFO][4414] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.484 [INFO][4414] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.495 [INFO][4414] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.515 [INFO][4414] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.516 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" host="localhost" Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.516 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:45.562871 containerd[1542]: 2025-07-01 08:39:45.517 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" HandleID="k8s-pod-network.abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Workload="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.563750 containerd[1542]: 2025-07-01 08:39:45.527 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7464967f58--kz8mf-eth0", GenerateName:"whisker-7464967f58-", Namespace:"calico-system", SelfLink:"", UID:"6a33d63a-3622-4ba2-92e2-3d0c014d3abe", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7464967f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7464967f58-kz8mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba8173fd249", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:45.563750 containerd[1542]: 2025-07-01 08:39:45.527 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.563750 containerd[1542]: 2025-07-01 08:39:45.528 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba8173fd249 ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.563750 containerd[1542]: 2025-07-01 08:39:45.534 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.563750 containerd[1542]: 2025-07-01 08:39:45.535 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" 
WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7464967f58--kz8mf-eth0", GenerateName:"whisker-7464967f58-", Namespace:"calico-system", SelfLink:"", UID:"6a33d63a-3622-4ba2-92e2-3d0c014d3abe", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7464967f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b", Pod:"whisker-7464967f58-kz8mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba8173fd249", MAC:"0e:cd:2a:b8:6d:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:45.563750 containerd[1542]: 2025-07-01 08:39:45.553 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" Namespace="calico-system" Pod="whisker-7464967f58-kz8mf" WorkloadEndpoint="localhost-k8s-whisker--7464967f58--kz8mf-eth0" Jul 1 08:39:45.582916 systemd[1]: Started cri-containerd-ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a.scope - 
libcontainer container ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a. Jul 1 08:39:45.621625 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:45.630928 containerd[1542]: time="2025-07-01T08:39:45.630858859Z" level=info msg="connecting to shim abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b" address="unix:///run/containerd/s/e328cadbe7f69318ad14f01aba149394faae18118bb9fd7da9266359191eebc8" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:45.635129 systemd-networkd[1475]: cali60ded29c5b7: Link UP Jul 1 08:39:45.636487 systemd-networkd[1475]: cali60ded29c5b7: Gained carrier Jul 1 08:39:45.637762 containerd[1542]: time="2025-07-01T08:39:45.637187914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d868c7f67-4frp4,Uid:be22af01-1e33-4ff0-84fd-b35a2cbb1ca7,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7\"" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:44.870 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0 coredns-674b8bbfcf- kube-system 04495a2b-e4bd-4639-a451-560be8d17132 911 0 2025-07-01 08:38:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bk4zk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60ded29c5b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 
08:39:44.870 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:44.971 [INFO][4416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" HandleID="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Workload="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:44.972 [INFO][4416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" HandleID="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Workload="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bk4zk", "timestamp":"2025-07-01 08:39:44.971779218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:44.972 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.516 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.517 [INFO][4416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.535 [INFO][4416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.555 [INFO][4416] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.568 [INFO][4416] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.572 [INFO][4416] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.580 [INFO][4416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.581 [INFO][4416] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.590 [INFO][4416] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97 Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.602 [INFO][4416] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.620 [INFO][4416] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.622 [INFO][4416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" host="localhost" Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.623 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:45.671117 containerd[1542]: 2025-07-01 08:39:45.623 [INFO][4416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" HandleID="k8s-pod-network.be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Workload="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.671968 containerd[1542]: 2025-07-01 08:39:45.630 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"04495a2b-e4bd-4639-a451-560be8d17132", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bk4zk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60ded29c5b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:45.671968 containerd[1542]: 2025-07-01 08:39:45.630 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.671968 containerd[1542]: 2025-07-01 08:39:45.630 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60ded29c5b7 ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.671968 containerd[1542]: 2025-07-01 08:39:45.637 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.671968 containerd[1542]: 2025-07-01 08:39:45.638 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"04495a2b-e4bd-4639-a451-560be8d17132", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97", Pod:"coredns-674b8bbfcf-bk4zk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60ded29c5b7", MAC:"86:bc:23:73:b4:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:45.671968 containerd[1542]: 2025-07-01 08:39:45.662 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" Namespace="kube-system" Pod="coredns-674b8bbfcf-bk4zk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bk4zk-eth0" Jul 1 08:39:45.688741 containerd[1542]: time="2025-07-01T08:39:45.688613500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m5pqf,Uid:7c1e9f24-4286-4947-b2c6-20a4f6d90605,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a\"" Jul 1 08:39:45.693068 systemd[1]: Started cri-containerd-abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b.scope - libcontainer container abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b. 
Jul 1 08:39:45.712633 containerd[1542]: time="2025-07-01T08:39:45.711392141Z" level=info msg="CreateContainer within sandbox \"ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:39:45.722297 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:45.730492 containerd[1542]: time="2025-07-01T08:39:45.730044108Z" level=info msg="connecting to shim be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97" address="unix:///run/containerd/s/2203f9bb7bbcf0cf0b53b740f3052b8792167ff1b57280193a0586ef18dd5efc" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:45.749968 containerd[1542]: time="2025-07-01T08:39:45.749882404Z" level=info msg="Container dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:45.769942 systemd[1]: Started cri-containerd-be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97.scope - libcontainer container be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97. 
Jul 1 08:39:45.788188 containerd[1542]: time="2025-07-01T08:39:45.787185707Z" level=info msg="CreateContainer within sandbox \"ae27775104615e0e1b2ce0e9fa874e5be8d950f7a959b7f45d8e3966e645e97a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a\"" Jul 1 08:39:45.789133 containerd[1542]: time="2025-07-01T08:39:45.789062647Z" level=info msg="StartContainer for \"dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a\"" Jul 1 08:39:45.790196 containerd[1542]: time="2025-07-01T08:39:45.790155357Z" level=info msg="connecting to shim dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a" address="unix:///run/containerd/s/da7f04b6cb1f02316fdf9519cb47efd8ad42f7769b40322c70518a3dd7e15899" protocol=ttrpc version=3 Jul 1 08:39:45.794405 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:45.797969 containerd[1542]: time="2025-07-01T08:39:45.797712983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7464967f58-kz8mf,Uid:6a33d63a-3622-4ba2-92e2-3d0c014d3abe,Namespace:calico-system,Attempt:0,} returns sandbox id \"abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b\"" Jul 1 08:39:45.833155 systemd[1]: Started cri-containerd-dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a.scope - libcontainer container dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a. 
Jul 1 08:39:45.852164 containerd[1542]: time="2025-07-01T08:39:45.852082911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bk4zk,Uid:04495a2b-e4bd-4639-a451-560be8d17132,Namespace:kube-system,Attempt:0,} returns sandbox id \"be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97\"" Jul 1 08:39:45.862635 containerd[1542]: time="2025-07-01T08:39:45.862561715Z" level=info msg="CreateContainer within sandbox \"be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:39:45.891677 containerd[1542]: time="2025-07-01T08:39:45.891367474Z" level=info msg="Container 86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:45.895723 systemd-networkd[1475]: vxlan.calico: Gained IPv6LL Jul 1 08:39:45.954084 containerd[1542]: time="2025-07-01T08:39:45.953978665Z" level=info msg="StartContainer for \"dd1cba607c195a70a7042f9011a775be49a494041086ec5cc33b27d227432f0a\" returns successfully" Jul 1 08:39:45.958784 containerd[1542]: time="2025-07-01T08:39:45.958600426Z" level=info msg="CreateContainer within sandbox \"be0f270ce86c79b4640e9142a77ce3f99bad370a4093e621e37c67e989882c97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431\"" Jul 1 08:39:45.959671 containerd[1542]: time="2025-07-01T08:39:45.959622089Z" level=info msg="StartContainer for \"86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431\"" Jul 1 08:39:45.961065 containerd[1542]: time="2025-07-01T08:39:45.961017539Z" level=info msg="connecting to shim 86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431" address="unix:///run/containerd/s/2203f9bb7bbcf0cf0b53b740f3052b8792167ff1b57280193a0586ef18dd5efc" protocol=ttrpc version=3 Jul 1 08:39:45.998246 systemd[1]: Started 
cri-containerd-86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431.scope - libcontainer container 86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431. Jul 1 08:39:46.063625 containerd[1542]: time="2025-07-01T08:39:46.063561848Z" level=info msg="StartContainer for \"86ae3868082530bfcc52a95f6b488a22fe48e5b86ae3db7f12581ba307174431\" returns successfully" Jul 1 08:39:46.151833 systemd-networkd[1475]: cali0c127bed15b: Gained IPv6LL Jul 1 08:39:46.279687 systemd-networkd[1475]: calibac0c00c658: Gained IPv6LL Jul 1 08:39:46.407856 systemd-networkd[1475]: calied813a1fdc4: Gained IPv6LL Jul 1 08:39:46.808642 kubelet[2710]: I0701 08:39:46.808347 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m5pqf" podStartSLOduration=72.808319749 podStartE2EDuration="1m12.808319749s" podCreationTimestamp="2025-07-01 08:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:39:46.8061706 +0000 UTC m=+78.082239925" watchObservedRunningTime="2025-07-01 08:39:46.808319749 +0000 UTC m=+78.084389054" Jul 1 08:39:47.175650 systemd-networkd[1475]: cali46a536f9123: Gained IPv6LL Jul 1 08:39:47.367651 systemd-networkd[1475]: cali60ded29c5b7: Gained IPv6LL Jul 1 08:39:47.431684 systemd-networkd[1475]: caliba8173fd249: Gained IPv6LL Jul 1 08:39:47.521791 kubelet[2710]: I0701 08:39:47.521723 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bk4zk" podStartSLOduration=72.521708149 podStartE2EDuration="1m12.521708149s" podCreationTimestamp="2025-07-01 08:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:39:47.520952846 +0000 UTC m=+78.797022151" watchObservedRunningTime="2025-07-01 08:39:47.521708149 +0000 UTC m=+78.797777444" Jul 1 08:39:47.649862 
systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:52488.service - OpenSSH per-connection server daemon (10.0.0.1:52488). Jul 1 08:39:47.726791 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 52488 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo Jul 1 08:39:47.729224 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:47.736250 systemd-logind[1525]: New session 10 of user core. Jul 1 08:39:47.745650 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 1 08:39:48.744610 sshd[4765]: Connection closed by 10.0.0.1 port 52488 Jul 1 08:39:48.745003 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:48.749716 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:52488.service: Deactivated successfully. Jul 1 08:39:48.751934 systemd[1]: session-10.scope: Deactivated successfully. Jul 1 08:39:48.752895 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Jul 1 08:39:48.754889 systemd-logind[1525]: Removed session 10. 
Jul 1 08:39:48.895000 containerd[1542]: time="2025-07-01T08:39:48.894911897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:48.899809 containerd[1542]: time="2025-07-01T08:39:48.899743417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 1 08:39:48.901776 containerd[1542]: time="2025-07-01T08:39:48.901701156Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:48.907711 containerd[1542]: time="2025-07-01T08:39:48.907646051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:48.908529 containerd[1542]: time="2025-07-01T08:39:48.908471639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.528991863s" Jul 1 08:39:48.908529 containerd[1542]: time="2025-07-01T08:39:48.908526333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 1 08:39:48.913392 containerd[1542]: time="2025-07-01T08:39:48.913338966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 1 08:39:48.920252 containerd[1542]: time="2025-07-01T08:39:48.920170376Z" level=info msg="CreateContainer within sandbox \"c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 1 08:39:48.942338 containerd[1542]: time="2025-07-01T08:39:48.942251255Z" level=info msg="Container c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:48.955773 containerd[1542]: time="2025-07-01T08:39:48.955715194Z" level=info msg="CreateContainer within sandbox \"c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012\"" Jul 1 08:39:48.956550 containerd[1542]: time="2025-07-01T08:39:48.956358974Z" level=info msg="StartContainer for \"c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012\"" Jul 1 08:39:48.958992 containerd[1542]: time="2025-07-01T08:39:48.958860412Z" level=info msg="connecting to shim c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012" address="unix:///run/containerd/s/67afe8488c12810983f363bfd78fd006d216720582dc9b1c060a660611c74855" protocol=ttrpc version=3 Jul 1 08:39:48.992823 systemd[1]: Started cri-containerd-c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012.scope - libcontainer container c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012. 
Jul 1 08:39:49.043161 containerd[1542]: time="2025-07-01T08:39:49.042957682Z" level=info msg="StartContainer for \"c80e62ef8f20a668e0d3a0b67cf58d4f2b6843181d97ef22f6f5da401a81b012\" returns successfully" Jul 1 08:39:51.876116 containerd[1542]: time="2025-07-01T08:39:51.876047436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-nb9gx,Uid:fad7d2f6-f023-4dfe-8f40-2b786f80da76,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:39:52.047934 systemd-networkd[1475]: calif9336b9cc0b: Link UP Jul 1 08:39:52.049513 systemd-networkd[1475]: calif9336b9cc0b: Gained carrier Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.927 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0 calico-apiserver-755864c8f7- calico-apiserver fad7d2f6-f023-4dfe-8f40-2b786f80da76 909 0 2025-07-01 08:38:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:755864c8f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-755864c8f7-nb9gx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif9336b9cc0b [] [] }} ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.927 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.074232 
containerd[1542]: 2025-07-01 08:39:51.982 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" HandleID="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Workload="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.982 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" HandleID="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Workload="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-755864c8f7-nb9gx", "timestamp":"2025-07-01 08:39:51.982242754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.982 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.982 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.982 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:51.995 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.004 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.011 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.013 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.017 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.017 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.020 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.027 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.039 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.039 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" host="localhost" Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.039 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:52.074232 containerd[1542]: 2025-07-01 08:39:52.039 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" HandleID="k8s-pod-network.db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Workload="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.074894 containerd[1542]: 2025-07-01 08:39:52.043 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0", GenerateName:"calico-apiserver-755864c8f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fad7d2f6-f023-4dfe-8f40-2b786f80da76", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755864c8f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-755864c8f7-nb9gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9336b9cc0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:52.074894 containerd[1542]: 2025-07-01 08:39:52.043 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.074894 containerd[1542]: 2025-07-01 08:39:52.043 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9336b9cc0b ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.074894 containerd[1542]: 2025-07-01 08:39:52.050 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.074894 containerd[1542]: 2025-07-01 08:39:52.051 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0", GenerateName:"calico-apiserver-755864c8f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fad7d2f6-f023-4dfe-8f40-2b786f80da76", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"755864c8f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef", Pod:"calico-apiserver-755864c8f7-nb9gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9336b9cc0b", MAC:"12:97:c6:7c:64:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:52.074894 containerd[1542]: 2025-07-01 08:39:52.064 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" Namespace="calico-apiserver" Pod="calico-apiserver-755864c8f7-nb9gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--755864c8f7--nb9gx-eth0" Jul 1 08:39:52.115941 containerd[1542]: time="2025-07-01T08:39:52.115862039Z" level=info msg="connecting to shim db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef" address="unix:///run/containerd/s/62d4aeaf7aba5d002bbe8924a42b3ec530918abe2df48ed28f4e948042e85a2d" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:52.161797 systemd[1]: Started cri-containerd-db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef.scope - libcontainer container db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef. Jul 1 08:39:52.184366 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:52.439552 containerd[1542]: time="2025-07-01T08:39:52.423021638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-755864c8f7-nb9gx,Uid:fad7d2f6-f023-4dfe-8f40-2b786f80da76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef\"" Jul 1 08:39:52.875841 containerd[1542]: time="2025-07-01T08:39:52.875781850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hrqb5,Uid:8dd24de6-7001-4bd3-8b8c-a553b183fab5,Namespace:calico-system,Attempt:0,}" Jul 1 08:39:53.196976 containerd[1542]: time="2025-07-01T08:39:53.196678360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:53.208996 containerd[1542]: time="2025-07-01T08:39:53.208906492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 1 08:39:53.223983 systemd-networkd[1475]: cali066661a7c30: Link UP Jul 1 08:39:53.224691 
systemd-networkd[1475]: cali066661a7c30: Gained carrier Jul 1 08:39:53.250194 containerd[1542]: time="2025-07-01T08:39:53.250113110Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.090 [INFO][4913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0 goldmane-768f4c5c69- calico-system 8dd24de6-7001-4bd3-8b8c-a553b183fab5 910 0 2025-07-01 08:38:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-hrqb5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali066661a7c30 [] [] }} ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.090 [INFO][4913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.122 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" HandleID="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Workload="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.122 [INFO][4929] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" HandleID="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Workload="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-hrqb5", "timestamp":"2025-07-01 08:39:53.122794915 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.123 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.123 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.123 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.132 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.139 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.145 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.148 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.151 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.151 [INFO][4929] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.153 [INFO][4929] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.180 [INFO][4929] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.217 [INFO][4929] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.217 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" host="localhost" Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.217 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:53.260577 containerd[1542]: 2025-07-01 08:39:53.217 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" HandleID="k8s-pod-network.7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Workload="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.261450 containerd[1542]: 2025-07-01 08:39:53.221 [INFO][4913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8dd24de6-7001-4bd3-8b8c-a553b183fab5", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-hrqb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali066661a7c30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:53.261450 containerd[1542]: 2025-07-01 08:39:53.221 [INFO][4913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.261450 containerd[1542]: 2025-07-01 08:39:53.221 [INFO][4913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali066661a7c30 ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.261450 containerd[1542]: 2025-07-01 08:39:53.223 [INFO][4913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.261450 containerd[1542]: 2025-07-01 08:39:53.224 [INFO][4913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8dd24de6-7001-4bd3-8b8c-a553b183fab5", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db", Pod:"goldmane-768f4c5c69-hrqb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali066661a7c30", MAC:"02:e6:a3:8a:ef:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:39:53.261450 containerd[1542]: 2025-07-01 08:39:53.256 [INFO][4913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" Namespace="calico-system" Pod="goldmane-768f4c5c69-hrqb5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hrqb5-eth0" Jul 1 08:39:53.271082 containerd[1542]: time="2025-07-01T08:39:53.271022994Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:53.273103 containerd[1542]: time="2025-07-01T08:39:53.273061440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.359611301s" Jul 1 08:39:53.273212 containerd[1542]: time="2025-07-01T08:39:53.273165348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 1 08:39:53.276616 containerd[1542]: time="2025-07-01T08:39:53.276445903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 1 08:39:53.293787 containerd[1542]: time="2025-07-01T08:39:53.293716097Z" level=info msg="CreateContainer within sandbox \"d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 1 08:39:53.363817 containerd[1542]: time="2025-07-01T08:39:53.363753419Z" level=info msg="Container d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:53.381266 containerd[1542]: time="2025-07-01T08:39:53.381198507Z" level=info msg="CreateContainer within sandbox \"d9b1b7b636c2d180b659c6d6383daa0776e3f9796da0edad60c7714fc634aded\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873\"" Jul 1 08:39:53.384775 containerd[1542]: time="2025-07-01T08:39:53.384596245Z" level=info msg="StartContainer for 
\"d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873\"" Jul 1 08:39:53.386878 containerd[1542]: time="2025-07-01T08:39:53.386506877Z" level=info msg="connecting to shim d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873" address="unix:///run/containerd/s/5722f2e6c1908e95b26f5552eb4495e25279c0bf6575d52eb71ce0f2eaf2cf8b" protocol=ttrpc version=3 Jul 1 08:39:53.387346 containerd[1542]: time="2025-07-01T08:39:53.387288848Z" level=info msg="connecting to shim 7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db" address="unix:///run/containerd/s/91a6fb595c2cd76e13bad821cb8b99cc055e6b6bd3d6359c731370252270660c" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:53.426798 systemd[1]: Started cri-containerd-d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873.scope - libcontainer container d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873. Jul 1 08:39:53.431759 systemd[1]: Started cri-containerd-7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db.scope - libcontainer container 7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db. Jul 1 08:39:53.453359 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:39:53.497367 containerd[1542]: time="2025-07-01T08:39:53.497296030Z" level=info msg="StartContainer for \"d801735143653904b3bff35e2284122dac5ebded4cd840dbceee63cb3c34a873\" returns successfully" Jul 1 08:39:53.502845 containerd[1542]: time="2025-07-01T08:39:53.502697568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hrqb5,Uid:8dd24de6-7001-4bd3-8b8c-a553b183fab5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db\"" Jul 1 08:39:53.767146 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:37050.service - OpenSSH per-connection server daemon (10.0.0.1:37050). 
Jul 1 08:39:53.829031 kubelet[2710]: I0701 08:39:53.828837 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-755864c8f7-l96sx" podStartSLOduration=50.946182982 podStartE2EDuration="58.82878178s" podCreationTimestamp="2025-07-01 08:38:55 +0000 UTC" firstStartedPulling="2025-07-01 08:39:45.391732783 +0000 UTC m=+76.667802088" lastFinishedPulling="2025-07-01 08:39:53.274331571 +0000 UTC m=+84.550400886" observedRunningTime="2025-07-01 08:39:53.828383329 +0000 UTC m=+85.104452665" watchObservedRunningTime="2025-07-01 08:39:53.82878178 +0000 UTC m=+85.104851095" Jul 1 08:39:53.871558 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 37050 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo Jul 1 08:39:53.873774 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:53.879339 systemd-logind[1525]: New session 11 of user core. Jul 1 08:39:53.887912 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 1 08:39:53.895672 systemd-networkd[1475]: calif9336b9cc0b: Gained IPv6LL Jul 1 08:39:54.099600 sshd[5034]: Connection closed by 10.0.0.1 port 37050 Jul 1 08:39:54.101922 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:54.106726 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:37050.service: Deactivated successfully. Jul 1 08:39:54.108961 systemd[1]: session-11.scope: Deactivated successfully. Jul 1 08:39:54.109887 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Jul 1 08:39:54.111468 systemd-logind[1525]: Removed session 11. 
Jul 1 08:39:54.599785 systemd-networkd[1475]: cali066661a7c30: Gained IPv6LL
Jul 1 08:39:54.813452 kubelet[2710]: I0701 08:39:54.813350 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 1 08:39:56.980104 containerd[1542]: time="2025-07-01T08:39:56.980015147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:39:57.103670 containerd[1542]: time="2025-07-01T08:39:57.103574184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 1 08:39:57.140878 containerd[1542]: time="2025-07-01T08:39:57.140801880Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:39:57.146327 containerd[1542]: time="2025-07-01T08:39:57.146206940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:39:57.147797 containerd[1542]: time="2025-07-01T08:39:57.147748595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.871252838s"
Jul 1 08:39:57.147903 containerd[1542]: time="2025-07-01T08:39:57.147802878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 1 08:39:57.156445 containerd[1542]: time="2025-07-01T08:39:57.156227137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 1 08:39:57.344901 containerd[1542]: time="2025-07-01T08:39:57.344807612Z" level=info msg="CreateContainer within sandbox \"bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 1 08:39:57.754947 containerd[1542]: time="2025-07-01T08:39:57.754358086Z" level=info msg="Container 63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:39:57.908027 containerd[1542]: time="2025-07-01T08:39:57.907980348Z" level=info msg="CreateContainer within sandbox \"bf03840e4a574380d2eacdf6c4f9365709f2d5f849ad7bbed34140a2169aabb7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2\""
Jul 1 08:39:57.909022 containerd[1542]: time="2025-07-01T08:39:57.908667074Z" level=info msg="StartContainer for \"63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2\""
Jul 1 08:39:57.910350 containerd[1542]: time="2025-07-01T08:39:57.910304362Z" level=info msg="connecting to shim 63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2" address="unix:///run/containerd/s/6d8a453cdf111b2e8439738c942db1fe4662fb7fb906a009a5262d76ab3b9f6a" protocol=ttrpc version=3
Jul 1 08:39:57.966671 systemd[1]: Started cri-containerd-63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2.scope - libcontainer container 63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2.
Jul 1 08:39:58.099285 containerd[1542]: time="2025-07-01T08:39:58.099212362Z" level=info msg="StartContainer for \"63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2\" returns successfully"
Jul 1 08:39:58.870643 containerd[1542]: time="2025-07-01T08:39:58.870554571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2\" id:\"f95de137b3221bdc82b639b506aec562d7e5a918ae1267777392adb424ca2591\" pid:5119 exited_at:{seconds:1751359198 nanos:870186070}"
Jul 1 08:39:58.912653 kubelet[2710]: I0701 08:39:58.912577 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d868c7f67-4frp4" podStartSLOduration=48.402030058 podStartE2EDuration="59.912558905s" podCreationTimestamp="2025-07-01 08:38:59 +0000 UTC" firstStartedPulling="2025-07-01 08:39:45.645289531 +0000 UTC m=+76.921358836" lastFinishedPulling="2025-07-01 08:39:57.155818378 +0000 UTC m=+88.431887683" observedRunningTime="2025-07-01 08:39:58.911188858 +0000 UTC m=+90.187258173" watchObservedRunningTime="2025-07-01 08:39:58.912558905 +0000 UTC m=+90.188628210"
Jul 1 08:39:59.117653 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:49680.service - OpenSSH per-connection server daemon (10.0.0.1:49680).
Jul 1 08:39:59.196073 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:39:59.197975 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:39:59.202775 systemd-logind[1525]: New session 12 of user core.
Jul 1 08:39:59.214694 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 1 08:39:59.402721 sshd[5136]: Connection closed by 10.0.0.1 port 49680
Jul 1 08:39:59.403216 sshd-session[5130]: pam_unix(sshd:session): session closed for user core
Jul 1 08:39:59.413661 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:49680.service: Deactivated successfully.
Jul 1 08:39:59.415812 systemd[1]: session-12.scope: Deactivated successfully.
Jul 1 08:39:59.416779 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit.
Jul 1 08:39:59.419995 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:49688.service - OpenSSH per-connection server daemon (10.0.0.1:49688).
Jul 1 08:39:59.421021 systemd-logind[1525]: Removed session 12.
Jul 1 08:39:59.475978 sshd[5150]: Accepted publickey for core from 10.0.0.1 port 49688 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:39:59.478245 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:39:59.485673 systemd-logind[1525]: New session 13 of user core.
Jul 1 08:39:59.491663 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 1 08:39:59.687896 sshd[5153]: Connection closed by 10.0.0.1 port 49688
Jul 1 08:39:59.688836 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Jul 1 08:39:59.703691 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:49688.service: Deactivated successfully.
Jul 1 08:39:59.706440 systemd[1]: session-13.scope: Deactivated successfully.
Jul 1 08:39:59.707769 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit.
Jul 1 08:39:59.712151 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:49700.service - OpenSSH per-connection server daemon (10.0.0.1:49700).
Jul 1 08:39:59.712995 systemd-logind[1525]: Removed session 13.
Jul 1 08:39:59.976890 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 49700 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:39:59.976904 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:39:59.982920 systemd-logind[1525]: New session 14 of user core.
Jul 1 08:39:59.992587 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 1 08:40:00.601818 sshd[5168]: Connection closed by 10.0.0.1 port 49700
Jul 1 08:40:00.605729 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:00.613824 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit.
Jul 1 08:40:00.616817 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:49700.service: Deactivated successfully.
Jul 1 08:40:00.622741 systemd[1]: session-14.scope: Deactivated successfully.
Jul 1 08:40:00.627133 systemd-logind[1525]: Removed session 14.
Jul 1 08:40:00.918897 containerd[1542]: time="2025-07-01T08:40:00.918718325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:00.926497 containerd[1542]: time="2025-07-01T08:40:00.926135483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 1 08:40:00.936407 containerd[1542]: time="2025-07-01T08:40:00.936343249Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:00.963052 containerd[1542]: time="2025-07-01T08:40:00.962986877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:00.963910 containerd[1542]: time="2025-07-01T08:40:00.963857342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.807584689s"
Jul 1 08:40:00.963910 containerd[1542]: time="2025-07-01T08:40:00.963899753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 1 08:40:00.965206 containerd[1542]: time="2025-07-01T08:40:00.965149951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 1 08:40:01.099888 containerd[1542]: time="2025-07-01T08:40:01.099837604Z" level=info msg="CreateContainer within sandbox \"abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 1 08:40:02.920710 containerd[1542]: time="2025-07-01T08:40:02.920635356Z" level=info msg="Container e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:40:04.138194 containerd[1542]: time="2025-07-01T08:40:04.138119558Z" level=info msg="CreateContainer within sandbox \"abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6\""
Jul 1 08:40:04.138787 containerd[1542]: time="2025-07-01T08:40:04.138758132Z" level=info msg="StartContainer for \"e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6\""
Jul 1 08:40:04.140263 containerd[1542]: time="2025-07-01T08:40:04.140226802Z" level=info msg="connecting to shim e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6" address="unix:///run/containerd/s/e328cadbe7f69318ad14f01aba149394faae18118bb9fd7da9266359191eebc8" protocol=ttrpc version=3
Jul 1 08:40:04.168700 systemd[1]: Started cri-containerd-e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6.scope - libcontainer container e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6.
Jul 1 08:40:04.255135 containerd[1542]: time="2025-07-01T08:40:04.255081074Z" level=info msg="StartContainer for \"e9c80818d3ca7228c41de4673cdabad33ab2385612557f33840437fa6dbb86c6\" returns successfully"
Jul 1 08:40:05.571913 containerd[1542]: time="2025-07-01T08:40:05.571822879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:05.572797 containerd[1542]: time="2025-07-01T08:40:05.572758646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 1 08:40:05.574768 containerd[1542]: time="2025-07-01T08:40:05.574678072Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:05.577648 containerd[1542]: time="2025-07-01T08:40:05.577524810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:05.578207 containerd[1542]: time="2025-07-01T08:40:05.578161900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.612959178s"
Jul 1 08:40:05.578207 containerd[1542]: time="2025-07-01T08:40:05.578198510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 1 08:40:05.580258 containerd[1542]: time="2025-07-01T08:40:05.580177539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 1 08:40:05.585594 containerd[1542]: time="2025-07-01T08:40:05.585525919Z" level=info msg="CreateContainer within sandbox \"c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 1 08:40:05.600379 containerd[1542]: time="2025-07-01T08:40:05.599913167Z" level=info msg="Container ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:40:05.626057 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:49710.service - OpenSSH per-connection server daemon (10.0.0.1:49710).
Jul 1 08:40:05.638463 containerd[1542]: time="2025-07-01T08:40:05.638354657Z" level=info msg="CreateContainer within sandbox \"c13aca9892c893563bffc3e2bb73095cd5fb3689107611175367ed4571bf6411\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903\""
Jul 1 08:40:05.640489 containerd[1542]: time="2025-07-01T08:40:05.639539558Z" level=info msg="StartContainer for \"ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903\""
Jul 1 08:40:05.641664 containerd[1542]: time="2025-07-01T08:40:05.641626481Z" level=info msg="connecting to shim ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903" address="unix:///run/containerd/s/67afe8488c12810983f363bfd78fd006d216720582dc9b1c060a660611c74855" protocol=ttrpc version=3
Jul 1 08:40:05.684873 systemd[1]: Started cri-containerd-ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903.scope - libcontainer container ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903.
Jul 1 08:40:05.712949 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 49710 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:05.714929 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:05.720883 systemd-logind[1525]: New session 15 of user core.
Jul 1 08:40:05.729613 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 1 08:40:05.773024 containerd[1542]: time="2025-07-01T08:40:05.772979857Z" level=info msg="StartContainer for \"ffa8f2dc05bf7e024b35cab9e6d902322a8cbc19dc417d3ffbd4f5271a838903\" returns successfully"
Jul 1 08:40:05.847190 kubelet[2710]: I0701 08:40:05.846991 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4kczh" podStartSLOduration=46.646799765 podStartE2EDuration="1m6.846969932s" podCreationTimestamp="2025-07-01 08:38:59 +0000 UTC" firstStartedPulling="2025-07-01 08:39:45.378982124 +0000 UTC m=+76.655051429" lastFinishedPulling="2025-07-01 08:40:05.579152291 +0000 UTC m=+96.855221596" observedRunningTime="2025-07-01 08:40:05.846450605 +0000 UTC m=+97.122519940" watchObservedRunningTime="2025-07-01 08:40:05.846969932 +0000 UTC m=+97.123039237"
Jul 1 08:40:05.891403 sshd[5253]: Connection closed by 10.0.0.1 port 49710
Jul 1 08:40:05.891800 sshd-session[5231]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:05.896941 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:49710.service: Deactivated successfully.
Jul 1 08:40:05.899121 systemd[1]: session-15.scope: Deactivated successfully.
Jul 1 08:40:05.900521 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit.
Jul 1 08:40:05.902116 systemd-logind[1525]: Removed session 15.
Jul 1 08:40:05.967367 kubelet[2710]: I0701 08:40:05.967315 2710 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 1 08:40:05.983767 kubelet[2710]: I0701 08:40:05.983703 2710 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 1 08:40:06.451846 containerd[1542]: time="2025-07-01T08:40:06.451767827Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:06.454858 containerd[1542]: time="2025-07-01T08:40:06.454815955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 1 08:40:06.457832 containerd[1542]: time="2025-07-01T08:40:06.457756680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 877.534387ms"
Jul 1 08:40:06.457832 containerd[1542]: time="2025-07-01T08:40:06.457819108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 1 08:40:06.459336 containerd[1542]: time="2025-07-01T08:40:06.459046449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 1 08:40:06.466779 containerd[1542]: time="2025-07-01T08:40:06.466725522Z" level=info msg="CreateContainer within sandbox \"db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 1 08:40:06.485635 containerd[1542]: time="2025-07-01T08:40:06.485543518Z" level=info msg="Container 401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:40:06.497198 containerd[1542]: time="2025-07-01T08:40:06.497105264Z" level=info msg="CreateContainer within sandbox \"db042a600c7adff3138a45ea7c6370646bb683ff444936aabee94c87870778ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2\""
Jul 1 08:40:06.498158 containerd[1542]: time="2025-07-01T08:40:06.498107888Z" level=info msg="StartContainer for \"401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2\""
Jul 1 08:40:06.499617 containerd[1542]: time="2025-07-01T08:40:06.499563532Z" level=info msg="connecting to shim 401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2" address="unix:///run/containerd/s/62d4aeaf7aba5d002bbe8924a42b3ec530918abe2df48ed28f4e948042e85a2d" protocol=ttrpc version=3
Jul 1 08:40:06.523751 systemd[1]: Started cri-containerd-401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2.scope - libcontainer container 401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2.
Jul 1 08:40:06.686944 containerd[1542]: time="2025-07-01T08:40:06.686887447Z" level=info msg="StartContainer for \"401a5ec848ec3d6c32721f0c174c6d3f73c2734a8504fa73602b5531b7c7a9d2\" returns successfully"
Jul 1 08:40:06.851750 kubelet[2710]: I0701 08:40:06.851650 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-755864c8f7-nb9gx" podStartSLOduration=57.817463943 podStartE2EDuration="1m11.851622875s" podCreationTimestamp="2025-07-01 08:38:55 +0000 UTC" firstStartedPulling="2025-07-01 08:39:52.424704399 +0000 UTC m=+83.700773704" lastFinishedPulling="2025-07-01 08:40:06.458863331 +0000 UTC m=+97.734932636" observedRunningTime="2025-07-01 08:40:06.85156196 +0000 UTC m=+98.127631265" watchObservedRunningTime="2025-07-01 08:40:06.851622875 +0000 UTC m=+98.127692181"
Jul 1 08:40:07.833516 kubelet[2710]: I0701 08:40:07.833462 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 1 08:40:10.834445 kubelet[2710]: I0701 08:40:10.834359 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 1 08:40:10.912521 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:35992.service - OpenSSH per-connection server daemon (10.0.0.1:35992).
Jul 1 08:40:11.006384 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 35992 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:11.011730 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:11.024987 systemd-logind[1525]: New session 16 of user core.
Jul 1 08:40:11.029931 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 1 08:40:11.085708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861160156.mount: Deactivated successfully.
Jul 1 08:40:11.343232 sshd[5332]: Connection closed by 10.0.0.1 port 35992
Jul 1 08:40:11.349486 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:35992.service: Deactivated successfully.
Jul 1 08:40:11.343567 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:11.352213 systemd[1]: session-16.scope: Deactivated successfully.
Jul 1 08:40:11.353512 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit.
Jul 1 08:40:11.356348 systemd-logind[1525]: Removed session 16.
Jul 1 08:40:12.763353 containerd[1542]: time="2025-07-01T08:40:12.763115412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2\" id:\"e541520996b154dde840b29225baff5f9883471141293068c84ebdde7a93a521\" pid:5360 exited_at:{seconds:1751359212 nanos:762600907}"
Jul 1 08:40:12.871993 kubelet[2710]: I0701 08:40:12.871505 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 1 08:40:12.965272 containerd[1542]: time="2025-07-01T08:40:12.965191764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\" id:\"741a7bce5e81f566c9afeea0977e53fba44f4e49e2f86caff6f0f00bf79e2b39\" pid:5385 exited_at:{seconds:1751359212 nanos:964769804}"
Jul 1 08:40:13.848895 containerd[1542]: time="2025-07-01T08:40:13.848830855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:13.850893 containerd[1542]: time="2025-07-01T08:40:13.850831158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 1 08:40:13.858069 containerd[1542]: time="2025-07-01T08:40:13.857952671Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:13.861873 containerd[1542]: time="2025-07-01T08:40:13.861814704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:13.863829 containerd[1542]: time="2025-07-01T08:40:13.862966649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 7.403874713s"
Jul 1 08:40:13.863829 containerd[1542]: time="2025-07-01T08:40:13.863027825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 1 08:40:13.865230 containerd[1542]: time="2025-07-01T08:40:13.865098381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 1 08:40:13.876206 containerd[1542]: time="2025-07-01T08:40:13.876017405Z" level=info msg="CreateContainer within sandbox \"7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 1 08:40:13.893660 containerd[1542]: time="2025-07-01T08:40:13.892634593Z" level=info msg="Container 67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:40:13.912826 containerd[1542]: time="2025-07-01T08:40:13.912738433Z" level=info msg="CreateContainer within sandbox \"7e10d9e1acb66c0f34b71e5951ad3b2228d8b153cb16fd875daea178ebc586db\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\""
Jul 1 08:40:13.913981 containerd[1542]: time="2025-07-01T08:40:13.913851885Z" level=info msg="StartContainer for \"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\""
Jul 1 08:40:13.915961 containerd[1542]: time="2025-07-01T08:40:13.915906110Z" level=info msg="connecting to shim 67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062" address="unix:///run/containerd/s/91a6fb595c2cd76e13bad821cb8b99cc055e6b6bd3d6359c731370252270660c" protocol=ttrpc version=3
Jul 1 08:40:13.949926 systemd[1]: Started cri-containerd-67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062.scope - libcontainer container 67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062.
Jul 1 08:40:14.043748 containerd[1542]: time="2025-07-01T08:40:14.043677408Z" level=info msg="StartContainer for \"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\" returns successfully"
Jul 1 08:40:14.917198 kubelet[2710]: I0701 08:40:14.917041 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-hrqb5" podStartSLOduration=56.557171735 podStartE2EDuration="1m16.916993495s" podCreationTimestamp="2025-07-01 08:38:58 +0000 UTC" firstStartedPulling="2025-07-01 08:39:53.504942037 +0000 UTC m=+84.781011352" lastFinishedPulling="2025-07-01 08:40:13.864763767 +0000 UTC m=+105.140833112" observedRunningTime="2025-07-01 08:40:14.916841106 +0000 UTC m=+106.192910431" watchObservedRunningTime="2025-07-01 08:40:14.916993495 +0000 UTC m=+106.193062800"
Jul 1 08:40:14.958325 containerd[1542]: time="2025-07-01T08:40:14.958276199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\" id:\"c3949bd966f38dd638fbc98d106523fba6b48c115ba9d9829a4961f40e7309c7\" pid:5454 exit_status:1 exited_at:{seconds:1751359214 nanos:957868545}"
Jul 1 08:40:15.951475 containerd[1542]: time="2025-07-01T08:40:15.951390133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\" id:\"745466f71e91edba73aa32ef3f42f9ecb21f8e8c22f76df5c3b6c7d805be2b6a\" pid:5483 exited_at:{seconds:1751359215 nanos:950861751}"
Jul 1 08:40:16.363036 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998).
Jul 1 08:40:16.460690 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:16.463052 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:16.471777 systemd-logind[1525]: New session 17 of user core.
Jul 1 08:40:16.478754 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 1 08:40:16.656361 sshd[5501]: Connection closed by 10.0.0.1 port 35998
Jul 1 08:40:16.657753 sshd-session[5498]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:16.665775 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:35998.service: Deactivated successfully.
Jul 1 08:40:16.668567 systemd[1]: session-17.scope: Deactivated successfully.
Jul 1 08:40:16.669554 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit.
Jul 1 08:40:16.671986 systemd-logind[1525]: Removed session 17.
Jul 1 08:40:18.265569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881648546.mount: Deactivated successfully.
Jul 1 08:40:18.344606 containerd[1542]: time="2025-07-01T08:40:18.344528642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:18.345455 containerd[1542]: time="2025-07-01T08:40:18.345389233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 1 08:40:18.346681 containerd[1542]: time="2025-07-01T08:40:18.346638960Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:18.349735 containerd[1542]: time="2025-07-01T08:40:18.349703967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:40:18.350171 containerd[1542]: time="2025-07-01T08:40:18.350139072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.484995914s"
Jul 1 08:40:18.350235 containerd[1542]: time="2025-07-01T08:40:18.350173386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 1 08:40:18.446579 containerd[1542]: time="2025-07-01T08:40:18.446444788Z" level=info msg="CreateContainer within sandbox \"abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 1 08:40:18.725555 containerd[1542]: time="2025-07-01T08:40:18.725472389Z" level=info msg="Container 2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:40:18.796941 containerd[1542]: time="2025-07-01T08:40:18.796891448Z" level=info msg="CreateContainer within sandbox \"abb25e4e12ffb47337b8715ef2b712af00acc1127e87695e01b29bfbcae9961b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601\""
Jul 1 08:40:18.797655 containerd[1542]: time="2025-07-01T08:40:18.797624747Z" level=info msg="StartContainer for \"2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601\""
Jul 1 08:40:18.799339 containerd[1542]: time="2025-07-01T08:40:18.798841903Z" level=info msg="connecting to shim 2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601" address="unix:///run/containerd/s/e328cadbe7f69318ad14f01aba149394faae18118bb9fd7da9266359191eebc8" protocol=ttrpc version=3
Jul 1 08:40:18.841741 systemd[1]: Started cri-containerd-2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601.scope - libcontainer container 2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601.
Jul 1 08:40:18.904437 containerd[1542]: time="2025-07-01T08:40:18.904368286Z" level=info msg="StartContainer for \"2f80028dfc2e2ae35752b615f2f94c383379699789e928df425b7e50f2297601\" returns successfully"
Jul 1 08:40:20.056639 kubelet[2710]: I0701 08:40:20.056338 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7464967f58-kz8mf" podStartSLOduration=5.50224713 podStartE2EDuration="38.056318576s" podCreationTimestamp="2025-07-01 08:39:42 +0000 UTC" firstStartedPulling="2025-07-01 08:39:45.799356827 +0000 UTC m=+77.075426122" lastFinishedPulling="2025-07-01 08:40:18.353428253 +0000 UTC m=+109.629497568" observedRunningTime="2025-07-01 08:40:20.055573224 +0000 UTC m=+111.331642539" watchObservedRunningTime="2025-07-01 08:40:20.056318576 +0000 UTC m=+111.332387881"
Jul 1 08:40:21.283112 containerd[1542]: time="2025-07-01T08:40:21.283023988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\" id:\"d12ab37cf17a0ed74b8133d138d38eb0c39b7b3307956fd0bdb0fe1f7a975707\" pid:5569 exited_at:{seconds:1751359221 nanos:282688313}"
Jul 1 08:40:21.677563 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:33878.service - OpenSSH per-connection server daemon (10.0.0.1:33878).
Jul 1 08:40:21.777793 sshd[5582]: Accepted publickey for core from 10.0.0.1 port 33878 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:21.780580 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:21.787015 systemd-logind[1525]: New session 18 of user core.
Jul 1 08:40:21.798706 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 1 08:40:22.076253 sshd[5585]: Connection closed by 10.0.0.1 port 33878
Jul 1 08:40:22.076609 sshd-session[5582]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:22.080683 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:33878.service: Deactivated successfully.
Jul 1 08:40:22.082872 systemd[1]: session-18.scope: Deactivated successfully.
Jul 1 08:40:22.083711 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit.
Jul 1 08:40:22.084902 systemd-logind[1525]: Removed session 18.
Jul 1 08:40:27.093092 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:33884.service - OpenSSH per-connection server daemon (10.0.0.1:33884).
Jul 1 08:40:27.147591 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 33884 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:27.149653 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:27.154767 systemd-logind[1525]: New session 19 of user core.
Jul 1 08:40:27.164595 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 1 08:40:27.326140 sshd[5608]: Connection closed by 10.0.0.1 port 33884
Jul 1 08:40:27.328662 sshd-session[5605]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:27.337735 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:33884.service: Deactivated successfully.
Jul 1 08:40:27.340568 systemd[1]: session-19.scope: Deactivated successfully.
Jul 1 08:40:27.342521 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit.
Jul 1 08:40:27.346766 systemd-logind[1525]: Removed session 19.
Jul 1 08:40:27.348038 systemd[1]: Started sshd@19-10.0.0.78:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890).
Jul 1 08:40:27.405686 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:27.407962 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:27.414562 systemd-logind[1525]: New session 20 of user core.
Jul 1 08:40:27.420739 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 1 08:40:28.020226 sshd[5624]: Connection closed by 10.0.0.1 port 33890
Jul 1 08:40:28.019669 sshd-session[5621]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:28.033330 systemd[1]: sshd@19-10.0.0.78:22-10.0.0.1:33890.service: Deactivated successfully.
Jul 1 08:40:28.036353 systemd[1]: session-20.scope: Deactivated successfully.
Jul 1 08:40:28.039188 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit.
Jul 1 08:40:28.043916 systemd[1]: Started sshd@20-10.0.0.78:22-10.0.0.1:33906.service - OpenSSH per-connection server daemon (10.0.0.1:33906).
Jul 1 08:40:28.045283 systemd-logind[1525]: Removed session 20.
Jul 1 08:40:28.109406 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 33906 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:28.111217 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:28.116800 systemd-logind[1525]: New session 21 of user core.
Jul 1 08:40:28.122664 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 1 08:40:28.963910 containerd[1542]: time="2025-07-01T08:40:28.963862034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63a1210d034c1fcfb54d3011ef35668a37d8b4dd3e9865d9c7c56382449ecea2\" id:\"cb0e5e580ec5c754d6a0bc4a1d3a5377a10304527c9468df94b7f30cda3a03b5\" pid:5662 exited_at:{seconds:1751359228 nanos:962649380}"
Jul 1 08:40:30.819990 sshd[5639]: Connection closed by 10.0.0.1 port 33906
Jul 1 08:40:30.821029 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:30.831860 systemd[1]: sshd@20-10.0.0.78:22-10.0.0.1:33906.service: Deactivated successfully.
Jul 1 08:40:30.833913 systemd[1]: session-21.scope: Deactivated successfully.
Jul 1 08:40:30.834815 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit.
Jul 1 08:40:30.839014 systemd[1]: Started sshd@21-10.0.0.78:22-10.0.0.1:52498.service - OpenSSH per-connection server daemon (10.0.0.1:52498).
Jul 1 08:40:30.840021 systemd-logind[1525]: Removed session 21.
Jul 1 08:40:30.906835 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 52498 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:30.909243 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:30.921443 systemd-logind[1525]: New session 22 of user core.
Jul 1 08:40:30.931776 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 1 08:40:31.443233 sshd[5685]: Connection closed by 10.0.0.1 port 52498
Jul 1 08:40:31.443750 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:31.457150 systemd[1]: sshd@21-10.0.0.78:22-10.0.0.1:52498.service: Deactivated successfully.
Jul 1 08:40:31.459679 systemd[1]: session-22.scope: Deactivated successfully.
Jul 1 08:40:31.460532 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit.
Jul 1 08:40:31.463358 systemd[1]: Started sshd@22-10.0.0.78:22-10.0.0.1:52514.service - OpenSSH per-connection server daemon (10.0.0.1:52514).
Jul 1 08:40:31.464090 systemd-logind[1525]: Removed session 22.
Jul 1 08:40:31.530464 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 52514 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:31.532833 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:31.539362 systemd-logind[1525]: New session 23 of user core.
Jul 1 08:40:31.548678 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 1 08:40:31.696537 sshd[5700]: Connection closed by 10.0.0.1 port 52514
Jul 1 08:40:31.697155 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:31.702069 systemd[1]: sshd@22-10.0.0.78:22-10.0.0.1:52514.service: Deactivated successfully.
Jul 1 08:40:31.704947 systemd[1]: session-23.scope: Deactivated successfully.
Jul 1 08:40:31.707618 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit.
Jul 1 08:40:31.710149 systemd-logind[1525]: Removed session 23.
Jul 1 08:40:36.714014 systemd[1]: Started sshd@23-10.0.0.78:22-10.0.0.1:52528.service - OpenSSH per-connection server daemon (10.0.0.1:52528).
Jul 1 08:40:36.784880 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 52528 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:36.787025 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:36.792712 systemd-logind[1525]: New session 24 of user core.
Jul 1 08:40:36.802817 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 1 08:40:36.930559 sshd[5718]: Connection closed by 10.0.0.1 port 52528
Jul 1 08:40:36.930903 sshd-session[5715]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:36.935271 systemd[1]: sshd@23-10.0.0.78:22-10.0.0.1:52528.service: Deactivated successfully.
Jul 1 08:40:36.937597 systemd[1]: session-24.scope: Deactivated successfully.
Jul 1 08:40:36.938501 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit.
Jul 1 08:40:36.939768 systemd-logind[1525]: Removed session 24.
Jul 1 08:40:41.947383 systemd[1]: Started sshd@24-10.0.0.78:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148).
Jul 1 08:40:41.995074 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:41.996563 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:42.000818 systemd-logind[1525]: New session 25 of user core.
Jul 1 08:40:42.013586 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 1 08:40:42.120112 sshd[5739]: Connection closed by 10.0.0.1 port 50148
Jul 1 08:40:42.120500 sshd-session[5736]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:42.124524 systemd[1]: sshd@24-10.0.0.78:22-10.0.0.1:50148.service: Deactivated successfully.
Jul 1 08:40:42.126514 systemd[1]: session-25.scope: Deactivated successfully.
Jul 1 08:40:42.127257 systemd-logind[1525]: Session 25 logged out. Waiting for processes to exit.
Jul 1 08:40:42.129223 systemd-logind[1525]: Removed session 25.
Jul 1 08:40:42.862662 containerd[1542]: time="2025-07-01T08:40:42.862609303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c8e2919101a188028e88b546c2a3552831da553595016afcdf0b19583c05443\" id:\"811c6c42c939a468d2c245d0d110c64701879a61f0b851ed56b578bb4ce4739f\" pid:5764 exited_at:{seconds:1751359242 nanos:862265505}"
Jul 1 08:40:45.957648 containerd[1542]: time="2025-07-01T08:40:45.957589122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c5543b4e5f0a038fc1269311f36ae2bc37b98058a4fe418cec5dd2b115a062\" id:\"efed197061104032cbdcdf46e01f0527160673da76348c762a670b59525aec0d\" pid:5789 exited_at:{seconds:1751359245 nanos:957198475}"
Jul 1 08:40:47.141121 systemd[1]: Started sshd@25-10.0.0.78:22-10.0.0.1:50154.service - OpenSSH per-connection server daemon (10.0.0.1:50154).
Jul 1 08:40:47.209600 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 50154 ssh2: RSA SHA256:Fdg/GPppvpuQQb5BRtreEtTPBEKGT5ZJUpnuhcL3IOo
Jul 1 08:40:47.211642 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:40:47.216784 systemd-logind[1525]: New session 26 of user core.
Jul 1 08:40:47.226059 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 1 08:40:47.366229 sshd[5805]: Connection closed by 10.0.0.1 port 50154
Jul 1 08:40:47.366724 sshd-session[5802]: pam_unix(sshd:session): session closed for user core
Jul 1 08:40:47.371442 systemd[1]: sshd@25-10.0.0.78:22-10.0.0.1:50154.service: Deactivated successfully.
Jul 1 08:40:47.374054 systemd[1]: session-26.scope: Deactivated successfully.
Jul 1 08:40:47.375063 systemd-logind[1525]: Session 26 logged out. Waiting for processes to exit.
Jul 1 08:40:47.377221 systemd-logind[1525]: Removed session 26.