Oct 13 00:03:08.773427 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 13 00:03:08.773448 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Oct 12 22:32:01 -00 2025 Oct 13 00:03:08.773457 kernel: KASLR enabled Oct 13 00:03:08.773463 kernel: efi: EFI v2.7 by EDK II Oct 13 00:03:08.773469 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Oct 13 00:03:08.773474 kernel: random: crng init done Oct 13 00:03:08.773481 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Oct 13 00:03:08.773487 kernel: secureboot: Secure boot enabled Oct 13 00:03:08.773492 kernel: ACPI: Early table checksum verification disabled Oct 13 00:03:08.773500 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Oct 13 00:03:08.773506 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 13 00:03:08.773512 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773517 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773523 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773530 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773537 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773543 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773549 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773555 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773561 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:03:08.773567 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 13 00:03:08.773573 kernel: ACPI: Use ACPI SPCR as default console: No Oct 13 00:03:08.773579 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 00:03:08.773585 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Oct 13 00:03:08.773591 kernel: Zone ranges: Oct 13 00:03:08.773598 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 00:03:08.773604 kernel: DMA32 empty Oct 13 00:03:08.773610 kernel: Normal empty Oct 13 00:03:08.773616 kernel: Device empty Oct 13 00:03:08.773622 kernel: Movable zone start for each node Oct 13 00:03:08.773628 kernel: Early memory node ranges Oct 13 00:03:08.773634 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Oct 13 00:03:08.773640 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Oct 13 00:03:08.773646 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Oct 13 00:03:08.773652 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Oct 13 00:03:08.773658 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Oct 13 00:03:08.773664 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Oct 13 00:03:08.773671 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Oct 13 00:03:08.773677 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Oct 13 00:03:08.773683 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 13 00:03:08.773692 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Oct 13 00:03:08.773698 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 13 00:03:08.773705 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Oct 13 00:03:08.773711 kernel: psci: probing for conduit method from ACPI. Oct 13 00:03:08.773719 kernel: psci: PSCIv1.1 detected in firmware. Oct 13 00:03:08.773725 kernel: psci: Using standard PSCI v0.2 function IDs Oct 13 00:03:08.773732 kernel: psci: Trusted OS migration not required Oct 13 00:03:08.773738 kernel: psci: SMC Calling Convention v1.1 Oct 13 00:03:08.773744 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 13 00:03:08.773751 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 13 00:03:08.773757 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 13 00:03:08.773764 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 13 00:03:08.773770 kernel: Detected PIPT I-cache on CPU0 Oct 13 00:03:08.773778 kernel: CPU features: detected: GIC system register CPU interface Oct 13 00:03:08.773784 kernel: CPU features: detected: Spectre-v4 Oct 13 00:03:08.773790 kernel: CPU features: detected: Spectre-BHB Oct 13 00:03:08.773797 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 13 00:03:08.773804 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 13 00:03:08.773810 kernel: CPU features: detected: ARM erratum 1418040 Oct 13 00:03:08.773816 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 13 00:03:08.773834 kernel: alternatives: applying boot alternatives Oct 13 00:03:08.773842 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910 Oct 13 00:03:08.773849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 00:03:08.773858 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 00:03:08.773868 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 00:03:08.773876 kernel: Fallback order for Node 0: 0 Oct 13 00:03:08.773883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 13 00:03:08.773889 kernel: Policy zone: DMA Oct 13 00:03:08.773895 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 00:03:08.773902 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 13 00:03:08.773908 kernel: software IO TLB: area num 4. Oct 13 00:03:08.773914 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 13 00:03:08.773921 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Oct 13 00:03:08.773927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 00:03:08.773933 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 00:03:08.773940 kernel: rcu: RCU event tracing is enabled. Oct 13 00:03:08.773948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 13 00:03:08.773955 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 00:03:08.773961 kernel: Tracing variant of Tasks RCU enabled. Oct 13 00:03:08.773967 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 13 00:03:08.774004 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 00:03:08.774012 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 00:03:08.774020 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 00:03:08.774026 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 13 00:03:08.774032 kernel: GICv3: 256 SPIs implemented Oct 13 00:03:08.774039 kernel: GICv3: 0 Extended SPIs implemented Oct 13 00:03:08.774045 kernel: Root IRQ handler: gic_handle_irq Oct 13 00:03:08.774069 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 13 00:03:08.774076 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 13 00:03:08.774082 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 13 00:03:08.774088 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 13 00:03:08.774095 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 13 00:03:08.774101 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 13 00:03:08.774107 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 13 00:03:08.774114 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 13 00:03:08.774120 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 00:03:08.774127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 00:03:08.774133 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 13 00:03:08.774140 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 13 00:03:08.774147 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 13 00:03:08.774154 kernel: arm-pv: using stolen time PV Oct 13 00:03:08.774160 kernel: Console: colour dummy device 80x25 Oct 13 00:03:08.774167 kernel: ACPI: Core revision 20240827 Oct 13 00:03:08.774173 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 13 00:03:08.774180 kernel: pid_max: default: 32768 minimum: 301 Oct 13 00:03:08.774187 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 00:03:08.774193 kernel: landlock: Up and running. Oct 13 00:03:08.774200 kernel: SELinux: Initializing. Oct 13 00:03:08.774206 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 00:03:08.774214 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 00:03:08.774221 kernel: rcu: Hierarchical SRCU implementation. Oct 13 00:03:08.774228 kernel: rcu: Max phase no-delay instances is 400. Oct 13 00:03:08.774243 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 00:03:08.774251 kernel: Remapping and enabling EFI services. Oct 13 00:03:08.774257 kernel: smp: Bringing up secondary CPUs ... 
Oct 13 00:03:08.774264 kernel: Detected PIPT I-cache on CPU1 Oct 13 00:03:08.774270 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 13 00:03:08.774277 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 13 00:03:08.774290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 00:03:08.774297 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 13 00:03:08.774305 kernel: Detected PIPT I-cache on CPU2 Oct 13 00:03:08.774312 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 13 00:03:08.774319 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 13 00:03:08.774325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 00:03:08.774332 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 13 00:03:08.774339 kernel: Detected PIPT I-cache on CPU3 Oct 13 00:03:08.774347 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 13 00:03:08.774354 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 13 00:03:08.774361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 00:03:08.774368 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 13 00:03:08.774375 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 00:03:08.774381 kernel: SMP: Total of 4 processors activated. Oct 13 00:03:08.774388 kernel: CPU: All CPU(s) started at EL1 Oct 13 00:03:08.774395 kernel: CPU features: detected: 32-bit EL0 Support Oct 13 00:03:08.774402 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 13 00:03:08.774410 kernel: CPU features: detected: Common not Private translations Oct 13 00:03:08.774417 kernel: CPU features: detected: CRC32 instructions Oct 13 00:03:08.774424 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 13 00:03:08.774431 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 13 00:03:08.774438 kernel: CPU features: detected: LSE atomic instructions Oct 13 00:03:08.774445 kernel: CPU features: detected: Privileged Access Never Oct 13 00:03:08.774452 kernel: CPU features: detected: RAS Extension Support Oct 13 00:03:08.774459 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 13 00:03:08.774465 kernel: alternatives: applying system-wide alternatives Oct 13 00:03:08.774474 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 13 00:03:08.774481 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved) Oct 13 00:03:08.774488 kernel: devtmpfs: initialized Oct 13 00:03:08.774495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 00:03:08.774502 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 00:03:08.774509 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 13 00:03:08.774515 kernel: 0 pages in range for non-PLT usage Oct 13 00:03:08.774522 kernel: 508560 pages in range for PLT usage Oct 13 00:03:08.774529 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 00:03:08.774538 kernel: SMBIOS 3.0.0 present. 
Oct 13 00:03:08.774545 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 13 00:03:08.774552 kernel: DMI: Memory slots populated: 1/1 Oct 13 00:03:08.774559 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 00:03:08.774567 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 13 00:03:08.774574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 13 00:03:08.774607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 13 00:03:08.774616 kernel: audit: initializing netlink subsys (disabled) Oct 13 00:03:08.774623 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Oct 13 00:03:08.774633 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 00:03:08.774640 kernel: cpuidle: using governor menu Oct 13 00:03:08.774647 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 13 00:03:08.774654 kernel: ASID allocator initialised with 32768 entries Oct 13 00:03:08.774662 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 00:03:08.774669 kernel: Serial: AMBA PL011 UART driver Oct 13 00:03:08.774676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 00:03:08.774683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 00:03:08.774690 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 13 00:03:08.774698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 13 00:03:08.774705 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 00:03:08.774712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 00:03:08.774719 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 13 00:03:08.774726 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 13 00:03:08.774733 kernel: ACPI: Added _OSI(Module Device) Oct 13 00:03:08.774740 kernel: ACPI: Added _OSI(Processor Device) Oct 13 00:03:08.774746 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 00:03:08.774753 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 00:03:08.774761 kernel: ACPI: Interpreter enabled Oct 13 00:03:08.774768 kernel: ACPI: Using GIC for interrupt routing Oct 13 00:03:08.774775 kernel: ACPI: MCFG table detected, 1 entries Oct 13 00:03:08.774782 kernel: ACPI: CPU0 has been hot-added Oct 13 00:03:08.774789 kernel: ACPI: CPU1 has been hot-added Oct 13 00:03:08.774796 kernel: ACPI: CPU2 has been hot-added Oct 13 00:03:08.774802 kernel: ACPI: CPU3 has been hot-added Oct 13 00:03:08.774809 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 13 00:03:08.774816 kernel: printk: legacy console [ttyAMA0] enabled Oct 13 00:03:08.774830 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 00:03:08.774970 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 00:03:08.775034 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 13 00:03:08.775093 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 13 00:03:08.775150 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 13 00:03:08.775282 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 13 00:03:08.775297 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 13 00:03:08.775308 
kernel: PCI host bridge to bus 0000:00 Oct 13 00:03:08.775386 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 13 00:03:08.775441 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 13 00:03:08.775493 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 13 00:03:08.775544 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 00:03:08.775619 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 13 00:03:08.775687 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 13 00:03:08.775749 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 13 00:03:08.775808 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 13 00:03:08.775876 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 13 00:03:08.775936 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 13 00:03:08.775995 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 13 00:03:08.776056 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 13 00:03:08.776114 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 13 00:03:08.776169 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 13 00:03:08.776225 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 13 00:03:08.776248 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 13 00:03:08.776259 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 13 00:03:08.776268 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 13 00:03:08.776276 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 13 00:03:08.776284 kernel: iommu: Default domain type: Translated Oct 13 00:03:08.776294 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 13 00:03:08.776301 kernel: efivars: Registered efivars operations Oct 13 00:03:08.776308 kernel: vgaarb: loaded Oct 13 00:03:08.776315 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 13 00:03:08.776322 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 00:03:08.776330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 00:03:08.776337 kernel: pnp: PnP ACPI init Oct 13 00:03:08.776500 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 13 00:03:08.776511 kernel: pnp: PnP ACPI: found 1 devices Oct 13 00:03:08.776521 kernel: NET: Registered PF_INET protocol family Oct 13 00:03:08.776529 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 00:03:08.776536 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 00:03:08.776543 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 00:03:08.776550 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 00:03:08.776557 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 00:03:08.776564 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 00:03:08.776572 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 00:03:08.776579 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 00:03:08.776588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 00:03:08.776595 kernel: PCI: CLS 0 bytes, default 64 Oct 13 00:03:08.776602 
kernel: kvm [1]: HYP mode not available Oct 13 00:03:08.776609 kernel: Initialise system trusted keyrings Oct 13 00:03:08.776616 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 00:03:08.776623 kernel: Key type asymmetric registered Oct 13 00:03:08.776630 kernel: Asymmetric key parser 'x509' registered Oct 13 00:03:08.776638 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 13 00:03:08.776645 kernel: io scheduler mq-deadline registered Oct 13 00:03:08.776653 kernel: io scheduler kyber registered Oct 13 00:03:08.776661 kernel: io scheduler bfq registered Oct 13 00:03:08.776668 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 13 00:03:08.776675 kernel: ACPI: button: Power Button [PWRB] Oct 13 00:03:08.776683 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 13 00:03:08.776743 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 13 00:03:08.776754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 00:03:08.776761 kernel: thunder_xcv, ver 1.0 Oct 13 00:03:08.776768 kernel: thunder_bgx, ver 1.0 Oct 13 00:03:08.776776 kernel: nicpf, ver 1.0 Oct 13 00:03:08.776783 kernel: nicvf, ver 1.0 Oct 13 00:03:08.776858 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 13 00:03:08.776915 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T00:03:08 UTC (1760313788) Oct 13 00:03:08.776925 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 00:03:08.776932 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 13 00:03:08.776939 kernel: watchdog: NMI not fully supported Oct 13 00:03:08.776947 kernel: watchdog: Hard watchdog permanently disabled Oct 13 00:03:08.776955 kernel: NET: Registered PF_INET6 protocol family Oct 13 00:03:08.776962 kernel: Segment Routing with IPv6 Oct 13 00:03:08.776969 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 00:03:08.776976 kernel: NET: Registered PF_PACKET protocol family Oct 13 00:03:08.776983 kernel: Key type dns_resolver registered Oct 13 00:03:08.776990 kernel: registered taskstats version 1 Oct 13 00:03:08.776997 kernel: Loading compiled-in X.509 certificates Oct 13 00:03:08.777004 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: b8447a1087a9e9c4d5b9d4c2f2bba5a69a74f139' Oct 13 00:03:08.777011 kernel: Demotion targets for Node 0: null Oct 13 00:03:08.777019 kernel: Key type .fscrypt registered Oct 13 00:03:08.777026 kernel: Key type fscrypt-provisioning registered Oct 13 00:03:08.777033 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 00:03:08.777040 kernel: ima: Allocated hash algorithm: sha1 Oct 13 00:03:08.777046 kernel: ima: No architecture policies found Oct 13 00:03:08.777053 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 13 00:03:08.777060 kernel: clk: Disabling unused clocks Oct 13 00:03:08.777067 kernel: PM: genpd: Disabling unused power domains Oct 13 00:03:08.777074 kernel: Warning: unable to open an initial console. Oct 13 00:03:08.777082 kernel: Freeing unused kernel memory: 38976K Oct 13 00:03:08.777089 kernel: Run /init as init process Oct 13 00:03:08.777096 kernel: with arguments: Oct 13 00:03:08.777103 kernel: /init Oct 13 00:03:08.777110 kernel: with environment: Oct 13 00:03:08.777116 kernel: HOME=/ Oct 13 00:03:08.777123 kernel: TERM=linux Oct 13 00:03:08.777130 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 00:03:08.777137 systemd[1]: Successfully made /usr/ read-only. 
Oct 13 00:03:08.777149 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 00:03:08.777157 systemd[1]: Detected virtualization kvm. Oct 13 00:03:08.777164 systemd[1]: Detected architecture arm64. Oct 13 00:03:08.777171 systemd[1]: Running in initrd. Oct 13 00:03:08.777179 systemd[1]: No hostname configured, using default hostname. Oct 13 00:03:08.777186 systemd[1]: Hostname set to . Oct 13 00:03:08.777193 systemd[1]: Initializing machine ID from VM UUID. Oct 13 00:03:08.777202 systemd[1]: Queued start job for default target initrd.target. Oct 13 00:03:08.777210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:03:08.777218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:03:08.777226 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 00:03:08.777233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 00:03:08.777249 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 00:03:08.777258 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 00:03:08.777268 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 13 00:03:08.777276 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 13 00:03:08.777283 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:03:08.777291 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:03:08.777298 systemd[1]: Reached target paths.target - Path Units. Oct 13 00:03:08.777306 systemd[1]: Reached target slices.target - Slice Units. Oct 13 00:03:08.777313 systemd[1]: Reached target swap.target - Swaps. Oct 13 00:03:08.777321 systemd[1]: Reached target timers.target - Timer Units. Oct 13 00:03:08.777330 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 00:03:08.777337 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 00:03:08.777357 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 00:03:08.777365 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 00:03:08.777372 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:03:08.777380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 00:03:08.777387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:03:08.777395 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:03:08.777402 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 00:03:08.777411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 00:03:08.777419 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Oct 13 00:03:08.777427 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 00:03:08.777435 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 00:03:08.777449 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 00:03:08.777457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 00:03:08.777465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:03:08.777473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 00:03:08.777484 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:03:08.777491 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 00:03:08.777518 systemd-journald[244]: Collecting audit messages is disabled. Oct 13 00:03:08.777539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 00:03:08.777548 systemd-journald[244]: Journal started Oct 13 00:03:08.777566 systemd-journald[244]: Runtime Journal (/run/log/journal/4e0b85e81807403aba76e8da414135e0) is 6M, max 48.5M, 42.4M free. Oct 13 00:03:08.770972 systemd-modules-load[246]: Inserted module 'overlay' Oct 13 00:03:08.779711 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 00:03:08.784257 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 00:03:08.786345 kernel: Bridge firewalling registered Oct 13 00:03:08.786281 systemd-modules-load[246]: Inserted module 'br_netfilter' Oct 13 00:03:08.786416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:03:08.788479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 00:03:08.790888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 00:03:08.792360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 00:03:08.794366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 00:03:08.798226 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 00:03:08.802878 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 00:03:08.808659 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 00:03:08.810384 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:03:08.811500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:03:08.814712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 00:03:08.816835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:03:08.819070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:03:08.832900 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 00:03:08.856952 systemd-resolved[286]: Positive Trust Anchors: Oct 13 00:03:08.856972 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 00:03:08.857002 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 00:03:08.861819 systemd-resolved[286]: Defaulting to hostname 'linux'. Oct 13 00:03:08.862809 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 00:03:08.864291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:03:08.870003 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910 Oct 13 00:03:08.943273 kernel: SCSI subsystem initialized Oct 13 00:03:08.948264 kernel: Loading iSCSI transport class v2.0-870. Oct 13 00:03:08.955266 kernel: iscsi: registered transport (tcp) Oct 13 00:03:08.968304 kernel: iscsi: registered transport (qla4xxx) Oct 13 00:03:08.968365 kernel: QLogic iSCSI HBA Driver Oct 13 00:03:08.985509 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 00:03:09.005056 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 00:03:09.006972 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 00:03:09.052010 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 00:03:09.054224 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 00:03:09.124286 kernel: raid6: neonx8 gen() 15780 MB/s Oct 13 00:03:09.141277 kernel: raid6: neonx4 gen() 15785 MB/s Oct 13 00:03:09.158268 kernel: raid6: neonx2 gen() 13248 MB/s Oct 13 00:03:09.175260 kernel: raid6: neonx1 gen() 10428 MB/s Oct 13 00:03:09.192261 kernel: raid6: int64x8 gen() 6887 MB/s Oct 13 00:03:09.209263 kernel: raid6: int64x4 gen() 7341 MB/s Oct 13 00:03:09.226260 kernel: raid6: int64x2 gen() 6101 MB/s Oct 13 00:03:09.243477 kernel: raid6: int64x1 gen() 5047 MB/s Oct 13 00:03:09.243490 kernel: raid6: using algorithm neonx4 gen() 15785 MB/s Oct 13 00:03:09.261520 kernel: raid6: .... xor() 12331 MB/s, rmw enabled Oct 13 00:03:09.261545 kernel: raid6: using neon recovery algorithm Oct 13 00:03:09.267264 kernel: xor: measuring software checksum speed Oct 13 00:03:09.267283 kernel: 8regs : 21601 MB/sec Oct 13 00:03:09.268563 kernel: 32regs : 18190 MB/sec Oct 13 00:03:09.268579 kernel: arm64_neon : 28109 MB/sec Oct 13 00:03:09.268588 kernel: xor: using function: arm64_neon (28109 MB/sec) Oct 13 00:03:09.322271 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 00:03:09.328227 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 00:03:09.330525 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Oct 13 00:03:09.355793 systemd-udevd[500]: Using default interface naming scheme 'v255'. Oct 13 00:03:09.359987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:03:09.361806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 00:03:09.393823 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Oct 13 00:03:09.418305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 00:03:09.420412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 00:03:09.475274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:03:09.477788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 00:03:09.533966 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 13 00:03:09.534402 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 13 00:03:09.542684 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 00:03:09.542724 kernel: GPT:9289727 != 19775487 Oct 13 00:03:09.545269 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 00:03:09.545312 kernel: GPT:9289727 != 19775487 Oct 13 00:03:09.545323 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 00:03:09.545332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:03:09.549732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 00:03:09.549865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:03:09.552450 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:03:09.555038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:03:09.584166 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 00:03:09.585653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:03:09.599869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 00:03:09.607252 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 00:03:09.613662 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 13 00:03:09.614664 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 00:03:09.627682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 00:03:09.628632 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 00:03:09.630134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:03:09.631949 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 00:03:09.634299 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 00:03:09.635925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 00:03:09.651685 disk-uuid[591]: Primary Header is updated. Oct 13 00:03:09.651685 disk-uuid[591]: Secondary Entries is updated. Oct 13 00:03:09.651685 disk-uuid[591]: Secondary Header is updated. Oct 13 00:03:09.656255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:03:09.656761 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Oct 13 00:03:10.664264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:03:10.665394 disk-uuid[594]: The operation has completed successfully. Oct 13 00:03:10.695028 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 00:03:10.695151 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 00:03:10.715807 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 00:03:10.747295 sh[610]: Success Oct 13 00:03:10.759258 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 00:03:10.759298 kernel: device-mapper: uevent: version 1.0.3 Oct 13 00:03:10.761108 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 00:03:10.768255 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 13 00:03:10.795831 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 00:03:10.797403 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 00:03:10.813626 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 13 00:03:10.819281 kernel: BTRFS: device fsid e4495086-3456-43e0-be7b-4c3c53a67174 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (622) Oct 13 00:03:10.821599 kernel: BTRFS info (device dm-0): first mount of filesystem e4495086-3456-43e0-be7b-4c3c53a67174 Oct 13 00:03:10.821615 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 13 00:03:10.826265 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 00:03:10.826287 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 00:03:10.827280 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 00:03:10.828399 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 00:03:10.829526 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 00:03:10.830306 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 00:03:10.833337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 00:03:10.861372 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Oct 13 00:03:10.861427 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 13 00:03:10.861437 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 00:03:10.865631 kernel: BTRFS info (device vda6): turning on async discard Oct 13 00:03:10.865681 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 00:03:10.871419 kernel: BTRFS info (device vda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 13 00:03:10.871805 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 00:03:10.874436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 00:03:10.943178 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 00:03:10.946812 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 13 00:03:10.984080 ignition[704]: Ignition 2.22.0 Oct 13 00:03:10.984095 ignition[704]: Stage: fetch-offline Oct 13 00:03:10.984132 ignition[704]: no configs at "/usr/lib/ignition/base.d" Oct 13 00:03:10.984140 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:03:10.984222 ignition[704]: parsed url from cmdline: "" Oct 13 00:03:10.984225 ignition[704]: no config URL provided Oct 13 00:03:10.984230 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 00:03:10.984252 ignition[704]: no config at "/usr/lib/ignition/user.ign" Oct 13 00:03:10.984279 ignition[704]: op(1): [started] loading QEMU firmware config module Oct 13 00:03:10.984283 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 00:03:10.992341 ignition[704]: op(1): [finished] loading QEMU firmware config module Oct 13 00:03:10.996832 systemd-networkd[803]: lo: Link UP Oct 13 00:03:10.996841 systemd-networkd[803]: lo: Gained carrier Oct 13 00:03:10.997515 systemd-networkd[803]: Enumeration completed Oct 13 00:03:10.997750 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 00:03:10.997996 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:03:10.998000 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 00:03:10.998898 systemd-networkd[803]: eth0: Link UP Oct 13 00:03:10.998989 systemd-networkd[803]: eth0: Gained carrier Oct 13 00:03:10.998997 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:03:10.999120 systemd[1]: Reached target network.target - Network. Oct 13 00:03:11.010287 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 00:03:11.043988 ignition[704]: parsing config with SHA512: 45d3b4ddd49ea91baddef3939b81924146acfabbd55aaa8e669983bbc8f3e2cbb32d8fa20adc1c65c53169cbe0e3701cb0925f1e2fafb34911ea63909d051b07 Oct 13 00:03:11.049920 unknown[704]: fetched base config from "system" Oct 13 00:03:11.049932 unknown[704]: fetched user config from "qemu" Oct 13 00:03:11.050322 ignition[704]: fetch-offline: fetch-offline passed Oct 13 00:03:11.050376 ignition[704]: Ignition finished successfully Oct 13 00:03:11.051959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 00:03:11.053320 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 00:03:11.054751 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 00:03:11.088438 ignition[813]: Ignition 2.22.0 Oct 13 00:03:11.088454 ignition[813]: Stage: kargs Oct 13 00:03:11.088573 ignition[813]: no configs at "/usr/lib/ignition/base.d" Oct 13 00:03:11.088581 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:03:11.089334 ignition[813]: kargs: kargs passed Oct 13 00:03:11.089378 ignition[813]: Ignition finished successfully Oct 13 00:03:11.091952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 00:03:11.093672 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 13 00:03:11.116933 ignition[821]: Ignition 2.22.0 Oct 13 00:03:11.116951 ignition[821]: Stage: disks Oct 13 00:03:11.117078 ignition[821]: no configs at "/usr/lib/ignition/base.d" Oct 13 00:03:11.117087 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:03:11.117826 ignition[821]: disks: disks passed Oct 13 00:03:11.119660 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 00:03:11.117869 ignition[821]: Ignition finished successfully Oct 13 00:03:11.121001 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 00:03:11.122206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 00:03:11.123562 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 00:03:11.124935 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 00:03:11.126427 systemd[1]: Reached target basic.target - Basic System. Oct 13 00:03:11.128649 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 00:03:11.163773 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 13 00:03:11.167930 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 00:03:11.171129 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 00:03:11.232259 kernel: EXT4-fs (vda9): mounted filesystem 1aa1d0b4-cbac-4728-b9e0-662fa574e9ad r/w with ordered data mode. Quota mode: none. Oct 13 00:03:11.233071 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 00:03:11.234143 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 00:03:11.236155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 00:03:11.238211 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 00:03:11.239629 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 00:03:11.239665 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 00:03:11.239686 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 00:03:11.251723 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 00:03:11.253909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 00:03:11.258265 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Oct 13 00:03:11.258292 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 13 00:03:11.258303 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 00:03:11.262613 kernel: BTRFS info (device vda6): turning on async discard Oct 13 00:03:11.262639 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 00:03:11.264027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 00:03:11.287543 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 00:03:11.291266 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Oct 13 00:03:11.294782 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 00:03:11.298111 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 00:03:11.361647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 13 00:03:11.364341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 00:03:11.365700 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 00:03:11.378266 kernel: BTRFS info (device vda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 13 00:03:11.390657 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 00:03:11.397864 ignition[952]: INFO : Ignition 2.22.0 Oct 13 00:03:11.397864 ignition[952]: INFO : Stage: mount Oct 13 00:03:11.399175 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:03:11.399175 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:03:11.399175 ignition[952]: INFO : mount: mount passed Oct 13 00:03:11.399175 ignition[952]: INFO : Ignition finished successfully Oct 13 00:03:11.400675 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 00:03:11.402460 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 00:03:11.818916 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 00:03:11.820613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 00:03:11.843379 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Oct 13 00:03:11.843418 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 13 00:03:11.843429 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 00:03:11.847824 kernel: BTRFS info (device vda6): turning on async discard Oct 13 00:03:11.847859 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 00:03:11.849324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 00:03:11.882871 ignition[982]: INFO : Ignition 2.22.0 Oct 13 00:03:11.882871 ignition[982]: INFO : Stage: files Oct 13 00:03:11.882871 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:03:11.882871 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:03:11.886394 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Oct 13 00:03:11.887714 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 00:03:11.887714 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 00:03:11.893675 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 00:03:11.894856 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 00:03:11.897170 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 00:03:11.895248 unknown[982]: wrote ssh authorized keys file for user: core Oct 13 00:03:11.899453 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 13 00:03:11.899453 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 13 00:03:11.995322 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 00:03:12.116321 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 13 00:03:12.116321 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 00:03:12.119647 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 00:03:12.131261 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 00:03:12.131261 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 00:03:12.131261 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 00:03:12.136866 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 00:03:12.136866 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 00:03:12.136866 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 13 00:03:12.513456 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 00:03:12.600362 systemd-networkd[803]: eth0: Gained IPv6LL Oct 13 00:03:13.111863 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 00:03:13.111863 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 00:03:13.115641 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 00:03:13.118931 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 00:03:13.135450 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 00:03:13.138850 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 00:03:13.138850 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 00:03:13.138850 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 13 00:03:13.138850 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 00:03:13.138850 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 00:03:13.138850 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 00:03:13.138850 ignition[982]: INFO : files: files passed Oct 13 00:03:13.138850 ignition[982]: INFO : Ignition finished successfully Oct 13 00:03:13.147818 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 00:03:13.149962 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 00:03:13.151276 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 00:03:13.169944 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 00:03:13.170028 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 00:03:13.172150 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 00:03:13.173432 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 00:03:13.173432 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 00:03:13.175854 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 00:03:13.176194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 00:03:13.178686 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 00:03:13.181370 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 00:03:13.214683 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 00:03:13.214790 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 00:03:13.216522 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 00:03:13.219398 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 00:03:13.220824 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 00:03:13.221549 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 00:03:13.253314 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Oct 13 00:03:13.255455 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 00:03:13.279894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:03:13.280844 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:03:13.282343 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 00:03:13.283650 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 00:03:13.283756 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 00:03:13.285715 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 00:03:13.286513 systemd[1]: Stopped target basic.target - Basic System. Oct 13 00:03:13.287974 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 00:03:13.289409 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 00:03:13.290705 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 00:03:13.292170 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 00:03:13.293800 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 00:03:13.295230 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 00:03:13.297028 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 00:03:13.298440 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 00:03:13.299961 systemd[1]: Stopped target swap.target - Swaps. Oct 13 00:03:13.301089 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 00:03:13.301200 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 00:03:13.302968 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:03:13.304329 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:03:13.305781 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 00:03:13.309330 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:03:13.310317 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 00:03:13.310421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 00:03:13.312802 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 00:03:13.312922 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 00:03:13.314367 systemd[1]: Stopped target paths.target - Path Units. Oct 13 00:03:13.315578 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 00:03:13.316324 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:03:13.317745 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 00:03:13.319003 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 00:03:13.320615 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 00:03:13.320693 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 00:03:13.321932 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 00:03:13.322007 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 00:03:13.323262 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Oct 13 00:03:13.323372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 00:03:13.324757 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 00:03:13.324870 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 00:03:13.326824 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 00:03:13.328499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 00:03:13.329177 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 00:03:13.329303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:03:13.330971 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 00:03:13.331064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 00:03:13.335454 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 00:03:13.339383 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 00:03:13.347527 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 00:03:13.356820 ignition[1037]: INFO : Ignition 2.22.0 Oct 13 00:03:13.356820 ignition[1037]: INFO : Stage: umount Oct 13 00:03:13.359232 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:03:13.359232 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:03:13.359232 ignition[1037]: INFO : umount: umount passed Oct 13 00:03:13.359232 ignition[1037]: INFO : Ignition finished successfully Oct 13 00:03:13.360166 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 00:03:13.362304 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 00:03:13.363889 systemd[1]: Stopped target network.target - Network. Oct 13 00:03:13.366648 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 00:03:13.366714 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 00:03:13.367547 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 00:03:13.367586 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 00:03:13.368994 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 00:03:13.369038 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 00:03:13.370368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 00:03:13.370407 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 00:03:13.371931 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 00:03:13.373229 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 00:03:13.382293 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 00:03:13.382405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 00:03:13.385563 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 13 00:03:13.385793 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 00:03:13.385837 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:03:13.389866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 13 00:03:13.390107 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 00:03:13.390190 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Oct 13 00:03:13.393591 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 13 00:03:13.393940 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 00:03:13.395448 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 00:03:13.395483 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:03:13.398603 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 00:03:13.401082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 00:03:13.401147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 00:03:13.402785 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 00:03:13.402837 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:03:13.405003 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 00:03:13.405063 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 00:03:13.406783 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:03:13.408980 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 13 00:03:13.411440 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 00:03:13.411778 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 00:03:13.415182 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 00:03:13.415320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 00:03:13.424949 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 00:03:13.425122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:03:13.427052 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 00:03:13.427156 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 00:03:13.428972 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 00:03:13.429051 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 00:03:13.430007 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 00:03:13.430040 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:03:13.431439 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 00:03:13.431479 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 00:03:13.433672 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 00:03:13.433720 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 00:03:13.436026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 00:03:13.436078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:03:13.439250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 00:03:13.440057 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 00:03:13.440110 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 00:03:13.442465 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 00:03:13.442506 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 13 00:03:13.445138 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 00:03:13.445181 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 00:03:13.448037 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 00:03:13.448089 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:03:13.449751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 00:03:13.449789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:03:13.459384 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 00:03:13.459497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 00:03:13.461358 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 00:03:13.463483 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 00:03:13.491639 systemd[1]: Switching root. Oct 13 00:03:13.519435 systemd-journald[244]: Journal stopped Oct 13 00:03:14.285388 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Oct 13 00:03:14.285433 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 00:03:14.285450 kernel: SELinux: policy capability open_perms=1 Oct 13 00:03:14.285459 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 00:03:14.285468 kernel: SELinux: policy capability always_check_network=0 Oct 13 00:03:14.285478 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 00:03:14.285489 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 00:03:14.285498 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 00:03:14.285510 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 00:03:14.285519 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 00:03:14.285530 kernel: audit: type=1403 audit(1760313793.705:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 00:03:14.285540 systemd[1]: Successfully loaded SELinux policy in 65.060ms. Oct 13 00:03:14.285558 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.229ms. Oct 13 00:03:14.285571 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 00:03:14.285582 systemd[1]: Detected virtualization kvm. Oct 13 00:03:14.285591 systemd[1]: Detected architecture arm64. Oct 13 00:03:14.285601 systemd[1]: Detected first boot. Oct 13 00:03:14.285611 systemd[1]: Initializing machine ID from VM UUID. Oct 13 00:03:14.285620 kernel: NET: Registered PF_VSOCK protocol family Oct 13 00:03:14.285630 zram_generator::config[1083]: No configuration found. Oct 13 00:03:14.285640 systemd[1]: Populated /etc with preset unit settings. Oct 13 00:03:14.285651 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 13 00:03:14.285662 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 00:03:14.285672 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 00:03:14.285682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Oct 13 00:03:14.285692 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 00:03:14.285702 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 00:03:14.285712 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 00:03:14.285722 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 00:03:14.285732 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 00:03:14.285746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 00:03:14.285757 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 00:03:14.285769 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 00:03:14.285780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:03:14.285790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:03:14.285800 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 00:03:14.285818 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 00:03:14.285829 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 00:03:14.285839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 00:03:14.285851 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 13 00:03:14.285861 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:03:14.285871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:03:14.285882 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 00:03:14.285891 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 00:03:14.285901 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 00:03:14.285911 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 00:03:14.285920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:03:14.285931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 00:03:14.285941 systemd[1]: Reached target slices.target - Slice Units. Oct 13 00:03:14.285951 systemd[1]: Reached target swap.target - Swaps. Oct 13 00:03:14.285961 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 00:03:14.285971 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 00:03:14.285981 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 00:03:14.285990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:03:14.286000 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 00:03:14.286010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:03:14.286021 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 00:03:14.286031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 00:03:14.286040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Oct 13 00:03:14.286051 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 00:03:14.286064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 00:03:14.286074 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 00:03:14.286083 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 00:03:14.286094 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 00:03:14.286104 systemd[1]: Reached target machines.target - Containers. Oct 13 00:03:14.286115 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 00:03:14.286125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:03:14.286136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 00:03:14.286146 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 00:03:14.286156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:03:14.286166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:03:14.286176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:03:14.286186 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 00:03:14.286198 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:03:14.286208 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 00:03:14.286219 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 00:03:14.286229 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 00:03:14.286246 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 00:03:14.286258 kernel: fuse: init (API version 7.41) Oct 13 00:03:14.286268 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 00:03:14.286279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:03:14.286289 kernel: loop: module loaded Oct 13 00:03:14.286303 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 00:03:14.286313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 00:03:14.286323 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 00:03:14.286333 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 00:03:14.286343 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 00:03:14.286353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 00:03:14.286363 systemd[1]: verity-setup.service: Deactivated successfully. Oct 13 00:03:14.286373 systemd[1]: Stopped verity-setup.service. Oct 13 00:03:14.286384 kernel: ACPI: bus type drm_connector registered Oct 13 00:03:14.286393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Oct 13 00:03:14.286403 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 00:03:14.286414 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 00:03:14.286423 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 00:03:14.286455 systemd-journald[1158]: Collecting audit messages is disabled. Oct 13 00:03:14.286477 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 00:03:14.286488 systemd-journald[1158]: Journal started Oct 13 00:03:14.286509 systemd-journald[1158]: Runtime Journal (/run/log/journal/4e0b85e81807403aba76e8da414135e0) is 6M, max 48.5M, 42.4M free. Oct 13 00:03:14.063813 systemd[1]: Queued start job for default target multi-user.target. Oct 13 00:03:14.079090 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 00:03:14.079469 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 00:03:14.288348 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 00:03:14.289014 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 00:03:14.290269 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 00:03:14.293642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:03:14.294917 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 00:03:14.295074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 00:03:14.296326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:03:14.296484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:03:14.297560 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:03:14.297711 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:03:14.298912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:03:14.300357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:03:14.301512 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 00:03:14.301657 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 00:03:14.302748 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:03:14.304310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:03:14.305536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 00:03:14.308265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 00:03:14.309614 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 00:03:14.311011 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 00:03:14.323301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:03:14.325518 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 00:03:14.327649 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 00:03:14.329643 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 00:03:14.330657 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 00:03:14.330697 systemd[1]: Reached target local-fs.target - Local File Systems. 
Oct 13 00:03:14.332410 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 00:03:14.343311 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 00:03:14.344310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:03:14.345308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 00:03:14.347038 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 00:03:14.348371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:03:14.349250 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 00:03:14.351353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:03:14.352293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 00:03:14.355376 systemd-journald[1158]: Time spent on flushing to /var/log/journal/4e0b85e81807403aba76e8da414135e0 is 12.271ms for 883 entries. Oct 13 00:03:14.355376 systemd-journald[1158]: System Journal (/var/log/journal/4e0b85e81807403aba76e8da414135e0) is 8M, max 195.6M, 187.6M free. Oct 13 00:03:14.372065 systemd-journald[1158]: Received client request to flush runtime journal. Oct 13 00:03:14.372098 kernel: loop0: detected capacity change from 0 to 100632 Oct 13 00:03:14.356348 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 00:03:14.360565 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 00:03:14.364902 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 00:03:14.366344 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 00:03:14.368411 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 00:03:14.373828 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 00:03:14.378023 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 00:03:14.382262 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 00:03:14.386352 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 00:03:14.390819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:03:14.399376 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Oct 13 00:03:14.399390 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Oct 13 00:03:14.401262 kernel: loop1: detected capacity change from 0 to 200800 Oct 13 00:03:14.404126 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 00:03:14.410350 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 00:03:14.421414 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 00:03:14.429262 kernel: loop2: detected capacity change from 0 to 119368 Oct 13 00:03:14.443188 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 00:03:14.445453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Oct 13 00:03:14.449257 kernel: loop3: detected capacity change from 0 to 100632 Oct 13 00:03:14.463263 kernel: loop4: detected capacity change from 0 to 200800 Oct 13 00:03:14.469720 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Oct 13 00:03:14.469741 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Oct 13 00:03:14.471256 kernel: loop5: detected capacity change from 0 to 119368 Oct 13 00:03:14.472930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:03:14.476843 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 13 00:03:14.477310 (sd-merge)[1223]: Merged extensions into '/usr'. Oct 13 00:03:14.480870 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 00:03:14.480892 systemd[1]: Reloading... Oct 13 00:03:14.549261 zram_generator::config[1256]: No configuration found. Oct 13 00:03:14.627216 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 00:03:14.691443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 00:03:14.691767 systemd[1]: Reloading finished in 210 ms. Oct 13 00:03:14.727816 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 00:03:14.728961 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 00:03:14.744577 systemd[1]: Starting ensure-sysext.service... Oct 13 00:03:14.746381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 00:03:14.755778 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)... Oct 13 00:03:14.755793 systemd[1]: Reloading... Oct 13 00:03:14.764323 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 00:03:14.764361 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 00:03:14.764609 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 00:03:14.764800 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 00:03:14.765414 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 00:03:14.765609 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 13 00:03:14.765656 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 13 00:03:14.768287 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 00:03:14.768298 systemd-tmpfiles[1288]: Skipping /boot Oct 13 00:03:14.774823 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 00:03:14.774836 systemd-tmpfiles[1288]: Skipping /boot Oct 13 00:03:14.799298 zram_generator::config[1315]: No configuration found. Oct 13 00:03:14.930310 systemd[1]: Reloading finished in 174 ms. Oct 13 00:03:14.951284 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 00:03:14.956597 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:03:14.969316 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 13 00:03:14.972392 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 00:03:14.974346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 00:03:14.978381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 00:03:14.983373 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:03:14.987188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 00:03:14.992843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:03:14.994347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:03:14.998303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:03:15.000220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:03:15.001527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:03:15.001650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:03:15.004162 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 00:03:15.016345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 00:03:15.018297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:03:15.018445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:03:15.020126 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 00:03:15.021554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:03:15.021986 systemd-udevd[1361]: Using default interface naming scheme 'v255'. Oct 13 00:03:15.023177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:03:15.024767 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:03:15.024932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:03:15.032860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:03:15.036263 augenrules[1385]: No rules Oct 13 00:03:15.034465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:03:15.036490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:03:15.040508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:03:15.041420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:03:15.041581 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:03:15.058550 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Oct 13 00:03:15.059307 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 00:03:15.060810 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:03:15.062439 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 00:03:15.064446 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:03:15.065379 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:03:15.067970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:03:15.069283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:03:15.070764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:03:15.070911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:03:15.075173 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:03:15.075345 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:03:15.085218 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 00:03:15.120850 systemd[1]: Finished ensure-sysext.service. Oct 13 00:03:15.135264 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 00:03:15.147934 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 13 00:03:15.156386 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:03:15.157150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:03:15.158353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:03:15.161117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:03:15.163885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:03:15.166975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:03:15.167944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:03:15.167984 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:03:15.175241 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 00:03:15.180508 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 00:03:15.181477 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 00:03:15.182057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:03:15.182227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:03:15.183432 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:03:15.183579 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:03:15.184696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 13 00:03:15.184850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:03:15.186104 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:03:15.186254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:03:15.194218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 00:03:15.197634 augenrules[1443]: /sbin/augenrules: No change Oct 13 00:03:15.202439 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 00:03:15.203776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:03:15.203894 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:03:15.213692 augenrules[1472]: No rules Oct 13 00:03:15.214644 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:03:15.215096 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:03:15.219736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 00:03:15.236842 systemd-resolved[1355]: Positive Trust Anchors: Oct 13 00:03:15.236861 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 00:03:15.236893 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 00:03:15.243470 systemd-resolved[1355]: Defaulting to hostname 'linux'. Oct 13 00:03:15.245095 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 00:03:15.246269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:03:15.266753 systemd-networkd[1448]: lo: Link UP Oct 13 00:03:15.266763 systemd-networkd[1448]: lo: Gained carrier Oct 13 00:03:15.267655 systemd-networkd[1448]: Enumeration completed Oct 13 00:03:15.267749 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 00:03:15.268110 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:03:15.268118 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 00:03:15.268697 systemd-networkd[1448]: eth0: Link UP Oct 13 00:03:15.268819 systemd-networkd[1448]: eth0: Gained carrier Oct 13 00:03:15.268833 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:03:15.268886 systemd[1]: Reached target network.target - Network. Oct 13 00:03:15.271018 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 00:03:15.273395 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Oct 13 00:03:15.274939 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 00:03:15.276223 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 00:03:15.277715 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 00:03:15.278917 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 00:03:15.280299 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 00:03:15.282015 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 00:03:15.282049 systemd[1]: Reached target paths.target - Path Units. Oct 13 00:03:15.283159 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 00:03:15.283347 systemd-networkd[1448]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 00:03:15.284699 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 00:03:15.285040 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. Oct 13 00:03:15.285906 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 00:03:15.287532 systemd[1]: Reached target timers.target - Timer Units. Oct 13 00:03:15.288928 systemd-timesyncd[1450]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 00:03:15.288999 systemd-timesyncd[1450]: Initial clock synchronization to Mon 2025-10-13 00:03:14.899906 UTC. Oct 13 00:03:15.289390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 00:03:15.291814 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 00:03:15.296051 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 00:03:15.297350 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 00:03:15.298455 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 00:03:15.307114 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 00:03:15.309158 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 00:03:15.311916 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 00:03:15.313309 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 00:03:15.323621 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:03:15.324479 systemd[1]: Reached target basic.target - Basic System. Oct 13 00:03:15.325179 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:03:15.325210 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:03:15.326257 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 00:03:15.327998 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 00:03:15.329652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 00:03:15.338673 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 00:03:15.340671 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Oct 13 00:03:15.341459 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 00:03:15.342460 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 00:03:15.346111 jq[1504]: false Oct 13 00:03:15.345337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 00:03:15.348977 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 00:03:15.351086 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 00:03:15.355674 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 00:03:15.358411 extend-filesystems[1505]: Found /dev/vda6 Oct 13 00:03:15.359394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:03:15.361619 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 00:03:15.362107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 00:03:15.362403 extend-filesystems[1505]: Found /dev/vda9 Oct 13 00:03:15.362749 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 00:03:15.364566 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 00:03:15.366872 extend-filesystems[1505]: Checking size of /dev/vda9 Oct 13 00:03:15.367708 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 00:03:15.370645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 00:03:15.370839 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 00:03:15.371078 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 00:03:15.371273 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 00:03:15.373519 jq[1523]: true Oct 13 00:03:15.374484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 00:03:15.374662 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 00:03:15.384477 extend-filesystems[1505]: Resized partition /dev/vda9 Oct 13 00:03:15.386693 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 00:03:15.389588 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 00:03:15.399648 jq[1531]: true Oct 13 00:03:15.401948 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 00:03:15.404730 tar[1530]: linux-arm64/LICENSE Oct 13 00:03:15.404972 tar[1530]: linux-arm64/helm Oct 13 00:03:15.415553 update_engine[1522]: I20251013 00:03:15.415323 1522 main.cc:92] Flatcar Update Engine starting Oct 13 00:03:15.432539 dbus-daemon[1502]: [system] SELinux support is enabled Oct 13 00:03:15.432705 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 00:03:15.436305 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 00:03:15.436336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 13 00:03:15.437376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 00:03:15.437392 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 00:03:15.455171 systemd[1]: Started update-engine.service - Update Engine. Oct 13 00:03:15.468166 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 00:03:15.468207 update_engine[1522]: I20251013 00:03:15.457422 1522 update_check_scheduler.cc:74] Next update check in 11m32s Oct 13 00:03:15.458331 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 00:03:15.468399 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (Power Button) Oct 13 00:03:15.469290 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 00:03:15.469290 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 00:03:15.469290 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 00:03:15.480875 extend-filesystems[1505]: Resized filesystem in /dev/vda9 Oct 13 00:03:15.469821 systemd-logind[1518]: New seat seat0. Oct 13 00:03:15.483744 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Oct 13 00:03:15.471199 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 00:03:15.472574 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 00:03:15.481850 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 00:03:15.484470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:03:15.489300 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 00:03:15.493580 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 13 00:03:15.544554 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 00:03:15.582321 containerd[1536]: time="2025-10-13T00:03:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 00:03:15.584424 containerd[1536]: time="2025-10-13T00:03:15.584386280Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 00:03:15.594408 containerd[1536]: time="2025-10-13T00:03:15.594363200Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.24µs" Oct 13 00:03:15.594408 containerd[1536]: time="2025-10-13T00:03:15.594398280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 00:03:15.594493 containerd[1536]: time="2025-10-13T00:03:15.594415880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 00:03:15.594571 containerd[1536]: time="2025-10-13T00:03:15.594549880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 00:03:15.594598 containerd[1536]: time="2025-10-13T00:03:15.594570880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 00:03:15.594634 containerd[1536]: time="2025-10-13T00:03:15.594595120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:03:15.594668 containerd[1536]: time="2025-10-13T00:03:15.594647200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:03:15.594668 containerd[1536]: time="2025-10-13T00:03:15.594664120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 00:03:15.594906 containerd[1536]: time="2025-10-13T00:03:15.594881800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 00:03:15.594906 containerd[1536]: time="2025-10-13T00:03:15.594903680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:03:15.594952 containerd[1536]: time="2025-10-13T00:03:15.594915480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:03:15.594952 containerd[1536]: time="2025-10-13T00:03:15.594924160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 00:03:15.595011 containerd[1536]: time="2025-10-13T00:03:15.594992760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 00:03:15.595211 containerd[1536]: time="2025-10-13T00:03:15.595190000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:03:15.595257 containerd[1536]: time="2025-10-13T00:03:15.595226040Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:03:15.595282 containerd[1536]: time="2025-10-13T00:03:15.595257440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 00:03:15.595301 containerd[1536]: time="2025-10-13T00:03:15.595287600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 00:03:15.595552 containerd[1536]: time="2025-10-13T00:03:15.595528080Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 00:03:15.595627 containerd[1536]: time="2025-10-13T00:03:15.595607440Z" level=info msg="metadata content store policy set" policy=shared Oct 13 00:03:15.598962 containerd[1536]: time="2025-10-13T00:03:15.598923000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 00:03:15.599043 containerd[1536]: time="2025-10-13T00:03:15.599028720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 00:03:15.599063 containerd[1536]: time="2025-10-13T00:03:15.599054160Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 00:03:15.599117 containerd[1536]: time="2025-10-13T00:03:15.599067200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 00:03:15.599142 containerd[1536]: time="2025-10-13T00:03:15.599117520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 00:03:15.599142 containerd[1536]: time="2025-10-13T00:03:15.599132680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 00:03:15.599179 containerd[1536]: time="2025-10-13T00:03:15.599145240Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 00:03:15.599179 containerd[1536]: time="2025-10-13T00:03:15.599157200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 00:03:15.599179 containerd[1536]: time="2025-10-13T00:03:15.599167560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 00:03:15.599179 containerd[1536]: time="2025-10-13T00:03:15.599178080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 00:03:15.599250 containerd[1536]: time="2025-10-13T00:03:15.599187480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 00:03:15.599250 containerd[1536]: time="2025-10-13T00:03:15.599206440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 00:03:15.599367 containerd[1536]: time="2025-10-13T00:03:15.599343480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 00:03:15.599451 containerd[1536]: time="2025-10-13T00:03:15.599430000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 00:03:15.599479 containerd[1536]: time="2025-10-13T00:03:15.599459360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 
00:03:15.599479 containerd[1536]: time="2025-10-13T00:03:15.599473720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 00:03:15.599511 containerd[1536]: time="2025-10-13T00:03:15.599484320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 00:03:15.599511 containerd[1536]: time="2025-10-13T00:03:15.599494440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 00:03:15.599511 containerd[1536]: time="2025-10-13T00:03:15.599506320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 00:03:15.599567 containerd[1536]: time="2025-10-13T00:03:15.599516640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 00:03:15.599567 containerd[1536]: time="2025-10-13T00:03:15.599533920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 00:03:15.599567 containerd[1536]: time="2025-10-13T00:03:15.599544440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 00:03:15.599567 containerd[1536]: time="2025-10-13T00:03:15.599554400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 00:03:15.599857 containerd[1536]: time="2025-10-13T00:03:15.599785520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 00:03:15.599887 containerd[1536]: time="2025-10-13T00:03:15.599859280Z" level=info msg="Start snapshots syncer" Oct 13 00:03:15.599906 containerd[1536]: time="2025-10-13T00:03:15.599897200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 00:03:15.600447 containerd[1536]: time="2025-10-13T00:03:15.600397640Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 00:03:15.600555 containerd[1536]: time="2025-10-13T00:03:15.600462040Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 00:03:15.600734 containerd[1536]: time="2025-10-13T00:03:15.600615800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 00:03:15.600940 containerd[1536]: time="2025-10-13T00:03:15.600907240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 00:03:15.600940 containerd[1536]: time="2025-10-13T00:03:15.600943040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 00:03:15.600997 containerd[1536]: time="2025-10-13T00:03:15.600954800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 00:03:15.600997 containerd[1536]: time="2025-10-13T00:03:15.600966880Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 00:03:15.600997 containerd[1536]: time="2025-10-13T00:03:15.600977920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 00:03:15.600997 containerd[1536]: time="2025-10-13T00:03:15.600989720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 00:03:15.601070 containerd[1536]: time="2025-10-13T00:03:15.601000120Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 00:03:15.601070 containerd[1536]: time="2025-10-13T00:03:15.601033880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 00:03:15.601070 containerd[1536]: 
time="2025-10-13T00:03:15.601044440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 00:03:15.601070 containerd[1536]: time="2025-10-13T00:03:15.601054440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 00:03:15.601130 containerd[1536]: time="2025-10-13T00:03:15.601097800Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:03:15.601130 containerd[1536]: time="2025-10-13T00:03:15.601113240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:03:15.601130 containerd[1536]: time="2025-10-13T00:03:15.601121720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:03:15.601180 containerd[1536]: time="2025-10-13T00:03:15.601130800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:03:15.601219 containerd[1536]: time="2025-10-13T00:03:15.601139320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 00:03:15.601255 containerd[1536]: time="2025-10-13T00:03:15.601222080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 00:03:15.601275 containerd[1536]: time="2025-10-13T00:03:15.601254360Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 00:03:15.601418 containerd[1536]: time="2025-10-13T00:03:15.601396480Z" level=info msg="runtime interface created" Oct 13 00:03:15.601418 containerd[1536]: time="2025-10-13T00:03:15.601413680Z" level=info msg="created NRI interface" Oct 13 00:03:15.601474 containerd[1536]: time="2025-10-13T00:03:15.601426240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 00:03:15.601474 containerd[1536]: time="2025-10-13T00:03:15.601439560Z" level=info msg="Connect containerd service" Oct 13 00:03:15.601508 containerd[1536]: time="2025-10-13T00:03:15.601477960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 00:03:15.603314 containerd[1536]: time="2025-10-13T00:03:15.603265120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.680998880Z" level=info msg="Start subscribing containerd event" Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.681084640Z" level=info msg="Start recovering state" Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.681175120Z" level=info msg="Start event monitor" Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.681188040Z" level=info msg="Start cni network conf syncer for default" Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.681196400Z" level=info msg="Start streaming server" Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.681212440Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 00:03:15.681250 containerd[1536]: 
time="2025-10-13T00:03:15.681219480Z" level=info msg="runtime interface starting up..." Oct 13 00:03:15.681250 containerd[1536]: time="2025-10-13T00:03:15.681224800Z" level=info msg="starting plugins..." Oct 13 00:03:15.681477 containerd[1536]: time="2025-10-13T00:03:15.681324200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 00:03:15.681498 containerd[1536]: time="2025-10-13T00:03:15.681461400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 00:03:15.681563 containerd[1536]: time="2025-10-13T00:03:15.681542040Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 00:03:15.683484 containerd[1536]: time="2025-10-13T00:03:15.681698960Z" level=info msg="containerd successfully booted in 0.099762s" Oct 13 00:03:15.681814 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 00:03:15.734586 tar[1530]: linux-arm64/README.md Oct 13 00:03:15.753310 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 00:03:16.696358 systemd-networkd[1448]: eth0: Gained IPv6LL Oct 13 00:03:16.698654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 00:03:16.702413 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 00:03:16.705140 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 00:03:16.707569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:16.711434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 00:03:16.747269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 00:03:16.747855 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 00:03:16.749767 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 00:03:16.751275 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 00:03:16.753933 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 00:03:16.770190 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 00:03:16.772825 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 00:03:16.792092 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 00:03:16.792306 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 00:03:16.796157 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 00:03:16.816938 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 00:03:16.820197 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 00:03:16.822602 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 00:03:16.823789 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 00:03:17.249208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:17.250539 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 00:03:17.252027 systemd[1]: Startup finished in 2.066s (kernel) + 5.084s (initrd) + 3.612s (userspace) = 10.762s. 
Oct 13 00:03:17.254744 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:03:17.561870 kubelet[1640]: E1013 00:03:17.561777 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:03:17.564275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:03:17.564410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:03:17.564802 systemd[1]: kubelet.service: Consumed 684ms CPU time, 248.7M memory peak. Oct 13 00:03:21.639379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 00:03:21.640315 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:49618.service - OpenSSH per-connection server daemon (10.0.0.1:49618). Oct 13 00:03:21.737325 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 49618 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:21.739324 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:21.747759 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 00:03:21.748696 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 00:03:21.757424 systemd-logind[1518]: New session 1 of user core. Oct 13 00:03:21.771982 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 00:03:21.775705 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 00:03:21.790705 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 00:03:21.793800 systemd-logind[1518]: New session c1 of user core. Oct 13 00:03:21.908922 systemd[1659]: Queued start job for default target default.target. Oct 13 00:03:21.920133 systemd[1659]: Created slice app.slice - User Application Slice. Oct 13 00:03:21.920164 systemd[1659]: Reached target paths.target - Paths. Oct 13 00:03:21.920200 systemd[1659]: Reached target timers.target - Timers. Oct 13 00:03:21.921442 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 00:03:21.930694 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 00:03:21.930754 systemd[1659]: Reached target sockets.target - Sockets. Oct 13 00:03:21.930790 systemd[1659]: Reached target basic.target - Basic System. Oct 13 00:03:21.930816 systemd[1659]: Reached target default.target - Main User Target. Oct 13 00:03:21.930841 systemd[1659]: Startup finished in 130ms. Oct 13 00:03:21.931081 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 00:03:21.932401 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 00:03:21.987406 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:49648.service - OpenSSH per-connection server daemon (10.0.0.1:49648). Oct 13 00:03:22.046737 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 49648 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:22.048035 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:22.052979 systemd-logind[1518]: New session 2 of user core. 
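The kubelet exit just above is the usual pre-provisioning state: /var/lib/kubelet/config.yaml does not exist until kubeadm (or whatever provisions the node) writes it, so the process fails with status 1 and systemd restarts it later. A small stdlib sketch that reproduces the same failure shape (illustrative only, not the kubelet's actual config loader):

    // kubelet-config-preflight.go - illustrative: until provisioning writes
    // /var/lib/kubelet/config.yaml, reading it fails with ENOENT and the
    // service exits with status 1, as in the entry above.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintf(os.Stderr, "failed to load Kubelet config file %s, error: %v\n", path, err)
            os.Exit(1) // systemd then records code=exited, status=1/FAILURE
        }
        fmt.Printf("kubelet config present (%d bytes)\n", len(data))
    }

The "kubelet.service: Scheduled restart job" entry further down is that restart loop in action.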
Oct 13 00:03:22.066452 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 00:03:22.118043 sshd[1673]: Connection closed by 10.0.0.1 port 49648 Oct 13 00:03:22.118412 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Oct 13 00:03:22.127088 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:49648.service: Deactivated successfully. Oct 13 00:03:22.128612 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 00:03:22.129868 systemd-logind[1518]: Session 2 logged out. Waiting for processes to exit. Oct 13 00:03:22.131503 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:49670.service - OpenSSH per-connection server daemon (10.0.0.1:49670). Oct 13 00:03:22.133600 systemd-logind[1518]: Removed session 2. Oct 13 00:03:22.189789 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 49670 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:22.191196 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:22.195156 systemd-logind[1518]: New session 3 of user core. Oct 13 00:03:22.206391 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 00:03:22.253055 sshd[1683]: Connection closed by 10.0.0.1 port 49670 Oct 13 00:03:22.253440 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Oct 13 00:03:22.265060 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:49670.service: Deactivated successfully. Oct 13 00:03:22.267820 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 00:03:22.273829 systemd-logind[1518]: Session 3 logged out. Waiting for processes to exit. Oct 13 00:03:22.275877 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:49702.service - OpenSSH per-connection server daemon (10.0.0.1:49702). Oct 13 00:03:22.278471 systemd-logind[1518]: Removed session 3. Oct 13 00:03:22.331314 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 49702 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:22.332575 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:22.337190 systemd-logind[1518]: New session 4 of user core. Oct 13 00:03:22.347404 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 00:03:22.398439 sshd[1692]: Connection closed by 10.0.0.1 port 49702 Oct 13 00:03:22.398394 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Oct 13 00:03:22.409100 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:49702.service: Deactivated successfully. Oct 13 00:03:22.410544 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 00:03:22.411143 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. Oct 13 00:03:22.413307 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:49728.service - OpenSSH per-connection server daemon (10.0.0.1:49728). Oct 13 00:03:22.414409 systemd-logind[1518]: Removed session 4. Oct 13 00:03:22.463096 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 49728 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:22.464483 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:22.468280 systemd-logind[1518]: New session 5 of user core. Oct 13 00:03:22.483424 systemd[1]: Started session-5.scope - Session 5 of User core. 
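The repeated "Accepted publickey for core ... RSA SHA256:TSsUX0+..." lines all carry the same fingerprint: the unpadded base64 of a SHA-256 hash over the key's wire-format blob. A sketch using golang.org/x/crypto/ssh (assumed available; the authorized_keys path is only an example) that computes the same string for a local key:

    // fingerprint.go - illustrative: prints the SHA256:... fingerprint sshd logs
    // for the first key in an authorized_keys file.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // example path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // ssh.FingerprintSHA256 returns "SHA256:..." (unpadded base64).
        fmt.Println(pub.Type(), ssh.FingerprintSHA256(pub))
    }

Run against the key provisioned for the core user, it should print the same SHA256:TSsU... value that sshd logs for each session.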
Oct 13 00:03:22.539271 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 00:03:22.539546 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:03:22.553132 sudo[1702]: pam_unix(sudo:session): session closed for user root Oct 13 00:03:22.554618 sshd[1701]: Connection closed by 10.0.0.1 port 49728 Oct 13 00:03:22.555079 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Oct 13 00:03:22.562106 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:49728.service: Deactivated successfully. Oct 13 00:03:22.563575 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 00:03:22.567386 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. Oct 13 00:03:22.568565 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:49732.service - OpenSSH per-connection server daemon (10.0.0.1:49732). Oct 13 00:03:22.569490 systemd-logind[1518]: Removed session 5. Oct 13 00:03:22.626817 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 49732 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:22.628133 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:22.632651 systemd-logind[1518]: New session 6 of user core. Oct 13 00:03:22.648449 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 00:03:22.698368 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 00:03:22.698614 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:03:22.770995 sudo[1713]: pam_unix(sudo:session): session closed for user root Oct 13 00:03:22.775670 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 00:03:22.776159 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:03:22.784096 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:03:22.826540 augenrules[1735]: No rules Oct 13 00:03:22.827727 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:03:22.827974 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:03:22.829127 sudo[1712]: pam_unix(sudo:session): session closed for user root Oct 13 00:03:22.830324 sshd[1711]: Connection closed by 10.0.0.1 port 49732 Oct 13 00:03:22.830821 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Oct 13 00:03:22.840004 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:49732.service: Deactivated successfully. Oct 13 00:03:22.842443 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 00:03:22.843051 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. Oct 13 00:03:22.845227 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:49742.service - OpenSSH per-connection server daemon (10.0.0.1:49742). Oct 13 00:03:22.845674 systemd-logind[1518]: Removed session 6. Oct 13 00:03:22.895825 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 49742 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:03:22.896882 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:03:22.900622 systemd-logind[1518]: New session 7 of user core. Oct 13 00:03:22.910378 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 13 00:03:22.959260 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 00:03:22.959509 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:03:23.236207 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 00:03:23.251586 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 00:03:23.453175 dockerd[1768]: time="2025-10-13T00:03:23.452486224Z" level=info msg="Starting up" Oct 13 00:03:23.453754 dockerd[1768]: time="2025-10-13T00:03:23.453728564Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 00:03:23.464528 dockerd[1768]: time="2025-10-13T00:03:23.464485794Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 00:03:23.496493 dockerd[1768]: time="2025-10-13T00:03:23.495959691Z" level=info msg="Loading containers: start." Oct 13 00:03:23.507255 kernel: Initializing XFRM netlink socket Oct 13 00:03:23.699956 systemd-networkd[1448]: docker0: Link UP Oct 13 00:03:23.703061 dockerd[1768]: time="2025-10-13T00:03:23.703029336Z" level=info msg="Loading containers: done." Oct 13 00:03:23.715273 dockerd[1768]: time="2025-10-13T00:03:23.715193698Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 00:03:23.715686 dockerd[1768]: time="2025-10-13T00:03:23.715454973Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 00:03:23.715686 dockerd[1768]: time="2025-10-13T00:03:23.715540979Z" level=info msg="Initializing buildkit" Oct 13 00:03:23.738276 dockerd[1768]: time="2025-10-13T00:03:23.738232608Z" level=info msg="Completed buildkit initialization" Oct 13 00:03:23.743022 dockerd[1768]: time="2025-10-13T00:03:23.742978517Z" level=info msg="Daemon has completed initialization" Oct 13 00:03:23.743171 dockerd[1768]: time="2025-10-13T00:03:23.743032899Z" level=info msg="API listen on /run/docker.sock" Oct 13 00:03:23.743189 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 00:03:24.157998 containerd[1536]: time="2025-10-13T00:03:24.157912865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 00:03:24.766346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695658747.mount: Deactivated successfully. 
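After "API listen on /run/docker.sock" the daemon answers HTTP over that unix socket; GET /_ping is the standard Engine API liveness endpoint. A stdlib sketch that checks it (the "docker" host in the URL is a placeholder, since the custom transport dials the socket directly):

    // dockerping.go - illustrative: probes the daemon the log shows starting by
    // calling GET /_ping over /run/docker.sock; expects "OK" with status 200.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
        "os"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Ignore the host/port in the URL and dial the unix socket.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://docker/_ping") // host is a placeholder
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s %s\n", resp.Status, body)
    }

Requires root or membership in the docker group, since the socket is not world-readable.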
Oct 13 00:03:25.630976 containerd[1536]: time="2025-10-13T00:03:25.630915798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:25.632210 containerd[1536]: time="2025-10-13T00:03:25.632182646Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 13 00:03:25.632841 containerd[1536]: time="2025-10-13T00:03:25.632801309Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:25.636019 containerd[1536]: time="2025-10-13T00:03:25.635980785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:25.638132 containerd[1536]: time="2025-10-13T00:03:25.638086194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.480131059s" Oct 13 00:03:25.638162 containerd[1536]: time="2025-10-13T00:03:25.638140942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 13 00:03:25.641688 containerd[1536]: time="2025-10-13T00:03:25.641650758Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 00:03:26.552288 containerd[1536]: time="2025-10-13T00:03:26.552215650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:26.553640 containerd[1536]: time="2025-10-13T00:03:26.553610025Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 13 00:03:26.554993 containerd[1536]: time="2025-10-13T00:03:26.554528170Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:26.557645 containerd[1536]: time="2025-10-13T00:03:26.557062378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:26.558180 containerd[1536]: time="2025-10-13T00:03:26.558142544Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 916.451844ms" Oct 13 00:03:26.558180 containerd[1536]: time="2025-10-13T00:03:26.558172437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 13 00:03:26.558603 
containerd[1536]: time="2025-10-13T00:03:26.558537152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 00:03:27.388465 containerd[1536]: time="2025-10-13T00:03:27.388418305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:27.389364 containerd[1536]: time="2025-10-13T00:03:27.389071907Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 13 00:03:27.390076 containerd[1536]: time="2025-10-13T00:03:27.390036945Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:27.394200 containerd[1536]: time="2025-10-13T00:03:27.394160436Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 835.595667ms" Oct 13 00:03:27.394200 containerd[1536]: time="2025-10-13T00:03:27.394199040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 13 00:03:27.394503 containerd[1536]: time="2025-10-13T00:03:27.394465543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:27.395333 containerd[1536]: time="2025-10-13T00:03:27.395301641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 00:03:27.814766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 00:03:27.816065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:27.928666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:27.931695 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:03:27.962595 kubelet[2058]: E1013 00:03:27.962559 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:03:27.965369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:03:27.965493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:03:27.965787 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107.4M memory peak. Oct 13 00:03:28.478441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255506947.mount: Deactivated successfully. 
Oct 13 00:03:28.632543 containerd[1536]: time="2025-10-13T00:03:28.632497035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:28.633215 containerd[1536]: time="2025-10-13T00:03:28.632992020Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 13 00:03:28.633776 containerd[1536]: time="2025-10-13T00:03:28.633758599Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:28.635555 containerd[1536]: time="2025-10-13T00:03:28.635523387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:28.636021 containerd[1536]: time="2025-10-13T00:03:28.635993775Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.240654233s" Oct 13 00:03:28.636052 containerd[1536]: time="2025-10-13T00:03:28.636026373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 13 00:03:28.636480 containerd[1536]: time="2025-10-13T00:03:28.636446815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 00:03:29.127438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267789790.mount: Deactivated successfully. 
Oct 13 00:03:30.035332 containerd[1536]: time="2025-10-13T00:03:30.035287386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:30.036897 containerd[1536]: time="2025-10-13T00:03:30.036873049Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 13 00:03:30.037773 containerd[1536]: time="2025-10-13T00:03:30.037745895Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:30.040968 containerd[1536]: time="2025-10-13T00:03:30.040930162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:30.042737 containerd[1536]: time="2025-10-13T00:03:30.042698445Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.406212993s" Oct 13 00:03:30.042782 containerd[1536]: time="2025-10-13T00:03:30.042742075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 13 00:03:30.043312 containerd[1536]: time="2025-10-13T00:03:30.043285252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 00:03:30.482994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923928276.mount: Deactivated successfully. 
Oct 13 00:03:30.490112 containerd[1536]: time="2025-10-13T00:03:30.489363097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:30.490112 containerd[1536]: time="2025-10-13T00:03:30.490056975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 13 00:03:30.491036 containerd[1536]: time="2025-10-13T00:03:30.490991675Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:30.492953 containerd[1536]: time="2025-10-13T00:03:30.492908515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:30.493612 containerd[1536]: time="2025-10-13T00:03:30.493442124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 450.124279ms" Oct 13 00:03:30.493612 containerd[1536]: time="2025-10-13T00:03:30.493475711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 13 00:03:30.493973 containerd[1536]: time="2025-10-13T00:03:30.493943696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 00:03:32.873263 containerd[1536]: time="2025-10-13T00:03:32.873192450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:32.875255 containerd[1536]: time="2025-10-13T00:03:32.875036087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 13 00:03:32.875911 containerd[1536]: time="2025-10-13T00:03:32.875858499Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:32.879076 containerd[1536]: time="2025-10-13T00:03:32.879039773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:32.880894 containerd[1536]: time="2025-10-13T00:03:32.880790426Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.386816229s" Oct 13 00:03:32.880894 containerd[1536]: time="2025-10-13T00:03:32.880833060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 13 00:03:37.495995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:37.496138 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107.4M memory peak. 
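Each pull above logs both the bytes read and the elapsed time, so effective registry throughput falls straight out of the numbers (the etcd image, for instance: about 97.4 MB in about 2.39 s, roughly 41 MB/s). A small sketch with a few (bytes, duration) pairs copied verbatim from the entries above; the arithmetic is the only point:

    // pullrate.go - illustrative arithmetic: effective throughput for the image
    // pulls recorded above (bytes read / reported pull duration).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        pulls := []struct {
            image string
            bytes float64       // "bytes read=" from the log
            took  time.Duration // "Pulled image ... in" duration from the log
        }{
            {"kube-apiserver:v1.34.1", 24574512, 1480131059 * time.Nanosecond},
            {"kube-proxy:v1.34.1", 22789030, 1240654233 * time.Nanosecond},
            {"coredns:v1.12.1", 20395408, 1406212993 * time.Nanosecond},
            {"etcd:3.6.4-0", 97410768, 2386816229 * time.Nanosecond},
        }
        for _, p := range pulls {
            mbps := p.bytes / p.took.Seconds() / 1e6
            fmt.Printf("%-24s %6.1f MB in %5.2fs  ~ %5.1f MB/s\n",
                p.image, p.bytes/1e6, p.took.Seconds(), mbps)
        }
    }
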
Oct 13 00:03:37.498070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:37.522471 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)... Oct 13 00:03:37.522491 systemd[1]: Reloading... Oct 13 00:03:37.590387 zram_generator::config[2249]: No configuration found. Oct 13 00:03:37.752537 systemd[1]: Reloading finished in 229 ms. Oct 13 00:03:37.804927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:37.807520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:37.808941 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 00:03:37.810273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:37.810318 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.1M memory peak. Oct 13 00:03:37.811765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:37.982610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:37.986834 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 00:03:38.020145 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 00:03:38.020145 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 00:03:38.020642 kubelet[2293]: I1013 00:03:38.020596 2293 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 00:03:38.529417 kubelet[2293]: I1013 00:03:38.529379 2293 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 00:03:38.529417 kubelet[2293]: I1013 00:03:38.529410 2293 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 00:03:38.529563 kubelet[2293]: I1013 00:03:38.529436 2293 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 00:03:38.529563 kubelet[2293]: I1013 00:03:38.529442 2293 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 00:03:38.529663 kubelet[2293]: I1013 00:03:38.529647 2293 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 00:03:38.592941 kubelet[2293]: E1013 00:03:38.592890 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 00:03:38.594803 kubelet[2293]: I1013 00:03:38.594256 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 00:03:38.603966 kubelet[2293]: I1013 00:03:38.600865 2293 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 00:03:38.603966 kubelet[2293]: I1013 00:03:38.603256 2293 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 00:03:38.603966 kubelet[2293]: I1013 00:03:38.603429 2293 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 00:03:38.603966 kubelet[2293]: I1013 00:03:38.603451 2293 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 00:03:38.604186 kubelet[2293]: I1013 00:03:38.603596 2293 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 00:03:38.604186 kubelet[2293]: I1013 00:03:38.603604 2293 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 00:03:38.604186 kubelet[2293]: I1013 00:03:38.603707 2293 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 00:03:38.608373 kubelet[2293]: I1013 00:03:38.608101 2293 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:03:38.609293 kubelet[2293]: I1013 00:03:38.609273 2293 kubelet.go:475] "Attempting to sync node with API server" Oct 13 00:03:38.609339 kubelet[2293]: I1013 00:03:38.609297 2293 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 00:03:38.609918 kubelet[2293]: I1013 00:03:38.609878 2293 kubelet.go:387] "Adding apiserver pod source" Oct 13 00:03:38.609918 kubelet[2293]: I1013 00:03:38.609901 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 00:03:38.610615 kubelet[2293]: E1013 00:03:38.610581 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 00:03:38.612250 kubelet[2293]: E1013 00:03:38.611284 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 00:03:38.612632 kubelet[2293]: I1013 00:03:38.612613 2293 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 00:03:38.613388 kubelet[2293]: I1013 00:03:38.613369 2293 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 00:03:38.613480 kubelet[2293]: I1013 00:03:38.613469 2293 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 00:03:38.613566 kubelet[2293]: W1013 00:03:38.613557 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 00:03:38.616433 kubelet[2293]: I1013 00:03:38.616416 2293 server.go:1262] "Started kubelet" Oct 13 00:03:38.616946 kubelet[2293]: I1013 00:03:38.616901 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 00:03:38.617107 kubelet[2293]: I1013 00:03:38.617067 2293 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 00:03:38.617144 kubelet[2293]: I1013 00:03:38.617135 2293 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 00:03:38.617437 kubelet[2293]: I1013 00:03:38.617416 2293 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 00:03:38.622317 kubelet[2293]: I1013 00:03:38.622294 2293 server.go:310] "Adding debug handlers to kubelet server" Oct 13 00:03:38.623651 kubelet[2293]: E1013 00:03:38.623631 2293 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 00:03:38.627389 kubelet[2293]: I1013 00:03:38.627368 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 00:03:38.627496 kubelet[2293]: I1013 00:03:38.627480 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 00:03:38.628856 kubelet[2293]: E1013 00:03:38.623621 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186de4176bd74658 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 00:03:38.616383064 +0000 UTC m=+0.626479561,LastTimestamp:2025-10-13 00:03:38.616383064 +0000 UTC m=+0.626479561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 00:03:38.628856 kubelet[2293]: E1013 00:03:38.628174 2293 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:03:38.628856 kubelet[2293]: I1013 00:03:38.628194 2293 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 00:03:38.629002 kubelet[2293]: I1013 00:03:38.628199 2293 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 00:03:38.629002 kubelet[2293]: I1013 00:03:38.628990 2293 reconciler.go:29] "Reconciler: start to sync state" Oct 13 00:03:38.629223 kubelet[2293]: E1013 00:03:38.629187 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 00:03:38.629298 kubelet[2293]: E1013 00:03:38.629274 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms" Oct 13 00:03:38.629505 kubelet[2293]: I1013 00:03:38.629481 2293 factory.go:223] Registration of the systemd container factory successfully Oct 13 00:03:38.629574 kubelet[2293]: I1013 00:03:38.629556 2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 00:03:38.632008 kubelet[2293]: I1013 00:03:38.631504 2293 factory.go:223] Registration of the containerd container factory successfully Oct 13 00:03:38.642352 kubelet[2293]: I1013 00:03:38.642322 2293 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 00:03:38.642352 kubelet[2293]: I1013 00:03:38.642351 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 00:03:38.642449 kubelet[2293]: I1013 00:03:38.642368 2293 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:03:38.644044 kubelet[2293]: I1013 00:03:38.644005 2293 
policy_none.go:49] "None policy: Start" Oct 13 00:03:38.644044 kubelet[2293]: I1013 00:03:38.644029 2293 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 00:03:38.644044 kubelet[2293]: I1013 00:03:38.644039 2293 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 00:03:38.645151 kubelet[2293]: I1013 00:03:38.645125 2293 policy_none.go:47] "Start" Oct 13 00:03:38.651788 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 00:03:38.654077 kubelet[2293]: I1013 00:03:38.653916 2293 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 00:03:38.655292 kubelet[2293]: I1013 00:03:38.655266 2293 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 00:03:38.655292 kubelet[2293]: I1013 00:03:38.655290 2293 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 00:03:38.655377 kubelet[2293]: I1013 00:03:38.655325 2293 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 00:03:38.655605 kubelet[2293]: E1013 00:03:38.655367 2293 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:03:38.656119 kubelet[2293]: E1013 00:03:38.656001 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 00:03:38.661930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 00:03:38.664805 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 00:03:38.675977 kubelet[2293]: E1013 00:03:38.675956 2293 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 00:03:38.676877 kubelet[2293]: I1013 00:03:38.676443 2293 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:03:38.676877 kubelet[2293]: I1013 00:03:38.676460 2293 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:03:38.676877 kubelet[2293]: I1013 00:03:38.676801 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:03:38.677727 kubelet[2293]: E1013 00:03:38.677707 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 00:03:38.677853 kubelet[2293]: E1013 00:03:38.677840 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 00:03:38.770724 systemd[1]: Created slice kubepods-burstable-pod3c47676d5639d75044526d04cc38ba1c.slice - libcontainer container kubepods-burstable-pod3c47676d5639d75044526d04cc38ba1c.slice. 
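All of the "connection refused" errors in this kubelet start have one cause: nothing is listening on 10.0.0.63:6443 yet, because the control-plane static pods the kubelet is about to create from /etc/kubernetes/manifests have not brought the API server up. A stdlib sketch of the same reachability probe, with the address taken from the log:

    // apicheck.go - illustrative: the same kind of TCP connect the kubelet
    // errors above show failing; it prints "connection refused" until
    // kube-apiserver is listening on 10.0.0.63:6443.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.63:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }

Once the kube-apiserver sandbox created below is running, the certificate signing request, the informer lists, and the node-lease calls retried above should start succeeding.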
Oct 13 00:03:38.777921 kubelet[2293]: I1013 00:03:38.777875 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:03:38.778684 kubelet[2293]: E1013 00:03:38.778645 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 13 00:03:38.793891 kubelet[2293]: E1013 00:03:38.793755 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:03:38.797090 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 13 00:03:38.807300 kubelet[2293]: E1013 00:03:38.807276 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:03:38.809492 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 00:03:38.811794 kubelet[2293]: E1013 00:03:38.811200 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:03:38.829859 kubelet[2293]: E1013 00:03:38.829811 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" Oct 13 00:03:38.831094 kubelet[2293]: I1013 00:03:38.830809 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:38.831094 kubelet[2293]: I1013 00:03:38.830839 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:38.831094 kubelet[2293]: I1013 00:03:38.830853 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:38.831094 kubelet[2293]: I1013 00:03:38.830869 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:38.831094 kubelet[2293]: I1013 00:03:38.830883 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:38.831284 kubelet[2293]: I1013 00:03:38.830897 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:38.831284 kubelet[2293]: I1013 00:03:38.830911 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c47676d5639d75044526d04cc38ba1c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c47676d5639d75044526d04cc38ba1c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:38.831284 kubelet[2293]: I1013 00:03:38.830925 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c47676d5639d75044526d04cc38ba1c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c47676d5639d75044526d04cc38ba1c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:38.831284 kubelet[2293]: I1013 00:03:38.830939 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c47676d5639d75044526d04cc38ba1c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c47676d5639d75044526d04cc38ba1c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:38.981076 kubelet[2293]: I1013 00:03:38.980890 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:03:38.981331 kubelet[2293]: E1013 00:03:38.981293 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 13 00:03:39.097903 containerd[1536]: time="2025-10-13T00:03:39.097690762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c47676d5639d75044526d04cc38ba1c,Namespace:kube-system,Attempt:0,}" Oct 13 00:03:39.110329 containerd[1536]: time="2025-10-13T00:03:39.110291295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 00:03:39.116722 containerd[1536]: time="2025-10-13T00:03:39.116684253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 00:03:39.230655 kubelet[2293]: E1013 00:03:39.230617 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" Oct 13 00:03:39.382973 kubelet[2293]: I1013 00:03:39.382858 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:03:39.383192 kubelet[2293]: E1013 00:03:39.383164 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": 
dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Oct 13 00:03:39.584343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475079679.mount: Deactivated successfully. Oct 13 00:03:39.588618 containerd[1536]: time="2025-10-13T00:03:39.588567502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:03:39.590769 containerd[1536]: time="2025-10-13T00:03:39.590726345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 13 00:03:39.593565 containerd[1536]: time="2025-10-13T00:03:39.593525142Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:03:39.595586 containerd[1536]: time="2025-10-13T00:03:39.595415152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:03:39.595586 containerd[1536]: time="2025-10-13T00:03:39.595535440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 00:03:39.595990 containerd[1536]: time="2025-10-13T00:03:39.595960879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 00:03:39.596138 containerd[1536]: time="2025-10-13T00:03:39.596112018Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:03:39.598204 containerd[1536]: time="2025-10-13T00:03:39.596892894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:03:39.598204 containerd[1536]: time="2025-10-13T00:03:39.597646431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 495.475391ms" Oct 13 00:03:39.600241 containerd[1536]: time="2025-10-13T00:03:39.600203973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 487.386345ms" Oct 13 00:03:39.604108 containerd[1536]: time="2025-10-13T00:03:39.603879909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.898007ms" Oct 13 00:03:39.614105 containerd[1536]: time="2025-10-13T00:03:39.614060269Z" level=info msg="connecting to shim 
ed2d564d8e7a9d361200e959d4612dd4a5294ca015c510c0cc389448ce48a9b6" address="unix:///run/containerd/s/7e269dc1c0ab6102ef0d66adf38ca2e16de1b61e48bf5eed5826e9c70ced22b4" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:03:39.622481 containerd[1536]: time="2025-10-13T00:03:39.622443410Z" level=info msg="connecting to shim 8396cde74242010a406d6df059d038041fd690478d4c32a78035f20750ab28e4" address="unix:///run/containerd/s/fcf2e48667b19ce2b3a9c765dbc14ca712e53c1b74ef608d9bfc892300bf6a1e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:03:39.634124 containerd[1536]: time="2025-10-13T00:03:39.633974440Z" level=info msg="connecting to shim 744cd9ec9f16dcbd10649c9efe0d4a09cc1d1fc79678b1a1bdf1358bfdda2c26" address="unix:///run/containerd/s/638f038ea7aa8df0dc5f360d2216a1a86ef17c8b695f7244f1895b030f2fcc2c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:03:39.646440 systemd[1]: Started cri-containerd-ed2d564d8e7a9d361200e959d4612dd4a5294ca015c510c0cc389448ce48a9b6.scope - libcontainer container ed2d564d8e7a9d361200e959d4612dd4a5294ca015c510c0cc389448ce48a9b6. Oct 13 00:03:39.649409 systemd[1]: Started cri-containerd-8396cde74242010a406d6df059d038041fd690478d4c32a78035f20750ab28e4.scope - libcontainer container 8396cde74242010a406d6df059d038041fd690478d4c32a78035f20750ab28e4. Oct 13 00:03:39.654797 systemd[1]: Started cri-containerd-744cd9ec9f16dcbd10649c9efe0d4a09cc1d1fc79678b1a1bdf1358bfdda2c26.scope - libcontainer container 744cd9ec9f16dcbd10649c9efe0d4a09cc1d1fc79678b1a1bdf1358bfdda2c26. Oct 13 00:03:39.687964 containerd[1536]: time="2025-10-13T00:03:39.687850803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8396cde74242010a406d6df059d038041fd690478d4c32a78035f20750ab28e4\"" Oct 13 00:03:39.690659 containerd[1536]: time="2025-10-13T00:03:39.690627131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c47676d5639d75044526d04cc38ba1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed2d564d8e7a9d361200e959d4612dd4a5294ca015c510c0cc389448ce48a9b6\"" Oct 13 00:03:39.695221 containerd[1536]: time="2025-10-13T00:03:39.695188905Z" level=info msg="CreateContainer within sandbox \"8396cde74242010a406d6df059d038041fd690478d4c32a78035f20750ab28e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 00:03:39.696788 containerd[1536]: time="2025-10-13T00:03:39.696757601Z" level=info msg="CreateContainer within sandbox \"ed2d564d8e7a9d361200e959d4612dd4a5294ca015c510c0cc389448ce48a9b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 00:03:39.702569 containerd[1536]: time="2025-10-13T00:03:39.702523056Z" level=info msg="Container 8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:03:39.706298 containerd[1536]: time="2025-10-13T00:03:39.706068127Z" level=info msg="Container 1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:03:39.708867 containerd[1536]: time="2025-10-13T00:03:39.708825857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"744cd9ec9f16dcbd10649c9efe0d4a09cc1d1fc79678b1a1bdf1358bfdda2c26\"" Oct 13 00:03:39.713830 containerd[1536]: time="2025-10-13T00:03:39.713789204Z" level=info 
msg="CreateContainer within sandbox \"744cd9ec9f16dcbd10649c9efe0d4a09cc1d1fc79678b1a1bdf1358bfdda2c26\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 00:03:39.715961 containerd[1536]: time="2025-10-13T00:03:39.715546992Z" level=info msg="CreateContainer within sandbox \"8396cde74242010a406d6df059d038041fd690478d4c32a78035f20750ab28e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb\"" Oct 13 00:03:39.715961 containerd[1536]: time="2025-10-13T00:03:39.715900474Z" level=info msg="CreateContainer within sandbox \"ed2d564d8e7a9d361200e959d4612dd4a5294ca015c510c0cc389448ce48a9b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed\"" Oct 13 00:03:39.716170 containerd[1536]: time="2025-10-13T00:03:39.716137698Z" level=info msg="StartContainer for \"8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb\"" Oct 13 00:03:39.717512 containerd[1536]: time="2025-10-13T00:03:39.717481741Z" level=info msg="StartContainer for \"1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed\"" Oct 13 00:03:39.718121 containerd[1536]: time="2025-10-13T00:03:39.718085457Z" level=info msg="connecting to shim 8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb" address="unix:///run/containerd/s/fcf2e48667b19ce2b3a9c765dbc14ca712e53c1b74ef608d9bfc892300bf6a1e" protocol=ttrpc version=3 Oct 13 00:03:39.718579 containerd[1536]: time="2025-10-13T00:03:39.718543064Z" level=info msg="connecting to shim 1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed" address="unix:///run/containerd/s/7e269dc1c0ab6102ef0d66adf38ca2e16de1b61e48bf5eed5826e9c70ced22b4" protocol=ttrpc version=3 Oct 13 00:03:39.723294 containerd[1536]: time="2025-10-13T00:03:39.723264517Z" level=info msg="Container 542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:03:39.729915 containerd[1536]: time="2025-10-13T00:03:39.729877896Z" level=info msg="CreateContainer within sandbox \"744cd9ec9f16dcbd10649c9efe0d4a09cc1d1fc79678b1a1bdf1358bfdda2c26\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40\"" Oct 13 00:03:39.731424 containerd[1536]: time="2025-10-13T00:03:39.731291702Z" level=info msg="StartContainer for \"542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40\"" Oct 13 00:03:39.733512 containerd[1536]: time="2025-10-13T00:03:39.733477125Z" level=info msg="connecting to shim 542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40" address="unix:///run/containerd/s/638f038ea7aa8df0dc5f360d2216a1a86ef17c8b695f7244f1895b030f2fcc2c" protocol=ttrpc version=3 Oct 13 00:03:39.736435 systemd[1]: Started cri-containerd-8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb.scope - libcontainer container 8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb. Oct 13 00:03:39.738858 systemd[1]: Started cri-containerd-1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed.scope - libcontainer container 1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed. Oct 13 00:03:39.760384 systemd[1]: Started cri-containerd-542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40.scope - libcontainer container 542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40. 
Oct 13 00:03:39.780562 containerd[1536]: time="2025-10-13T00:03:39.780481054Z" level=info msg="StartContainer for \"8380afcf7fef5ffef3d9bc77ead4b481a4d86fee11b65ad6c43e8143a6915eeb\" returns successfully" Oct 13 00:03:39.795513 containerd[1536]: time="2025-10-13T00:03:39.795458498Z" level=info msg="StartContainer for \"1c768867b250b34c7f31c3a42ce25a64cb136cdc2b70c3f40429ddc20834d9ed\" returns successfully" Oct 13 00:03:39.802177 containerd[1536]: time="2025-10-13T00:03:39.802141480Z" level=info msg="StartContainer for \"542e82969f41f1bbf32f6f80a668b9ebf8ffbc56833f6294e7f328897d662d40\" returns successfully" Oct 13 00:03:39.810032 kubelet[2293]: E1013 00:03:39.809937 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 00:03:40.184901 kubelet[2293]: I1013 00:03:40.184866 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:03:40.668281 kubelet[2293]: E1013 00:03:40.667514 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:03:40.671058 kubelet[2293]: E1013 00:03:40.670854 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:03:40.672927 kubelet[2293]: E1013 00:03:40.672903 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:03:41.510328 kubelet[2293]: E1013 00:03:41.510281 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 00:03:41.597274 kubelet[2293]: I1013 00:03:41.597032 2293 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 00:03:41.611460 kubelet[2293]: I1013 00:03:41.611426 2293 apiserver.go:52] "Watching apiserver" Oct 13 00:03:41.629445 kubelet[2293]: I1013 00:03:41.629405 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:41.629889 kubelet[2293]: I1013 00:03:41.629875 2293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 00:03:41.674334 kubelet[2293]: I1013 00:03:41.674300 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:41.676276 kubelet[2293]: I1013 00:03:41.674634 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:41.680491 kubelet[2293]: E1013 00:03:41.680463 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:41.680717 kubelet[2293]: E1013 00:03:41.680691 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:41.682118 kubelet[2293]: E1013 00:03:41.682090 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:41.682118 kubelet[2293]: I1013 00:03:41.682113 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:41.684452 kubelet[2293]: E1013 00:03:41.684415 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:41.684452 kubelet[2293]: I1013 00:03:41.684440 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:41.685963 kubelet[2293]: E1013 00:03:41.685924 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:42.171250 kubelet[2293]: I1013 00:03:42.171199 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:42.173079 kubelet[2293]: E1013 00:03:42.173033 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:43.424626 systemd[1]: Reload requested from client PID 2584 ('systemctl') (unit session-7.scope)... Oct 13 00:03:43.424648 systemd[1]: Reloading... Oct 13 00:03:43.481285 zram_generator::config[2627]: No configuration found. Oct 13 00:03:43.645637 systemd[1]: Reloading finished in 220 ms. Oct 13 00:03:43.672642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:43.688597 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 00:03:43.688929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:43.688974 systemd[1]: kubelet.service: Consumed 916ms CPU time, 122M memory peak. Oct 13 00:03:43.690888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:03:43.827894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:03:43.831468 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 00:03:43.871694 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 00:03:43.871694 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 00:03:43.871988 kubelet[2669]: I1013 00:03:43.871790 2669 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 00:03:43.879668 kubelet[2669]: I1013 00:03:43.879622 2669 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 00:03:43.879668 kubelet[2669]: I1013 00:03:43.879650 2669 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 00:03:43.879790 kubelet[2669]: I1013 00:03:43.879683 2669 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 00:03:43.879790 kubelet[2669]: I1013 00:03:43.879690 2669 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 00:03:43.879883 kubelet[2669]: I1013 00:03:43.879867 2669 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 00:03:43.880973 kubelet[2669]: I1013 00:03:43.880945 2669 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 00:03:43.883391 kubelet[2669]: I1013 00:03:43.883370 2669 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 00:03:43.885890 kubelet[2669]: I1013 00:03:43.885871 2669 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 00:03:43.888223 kubelet[2669]: I1013 00:03:43.888187 2669 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 13 00:03:43.888401 kubelet[2669]: I1013 00:03:43.888380 2669 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 00:03:43.888530 kubelet[2669]: I1013 00:03:43.888398 2669 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 00:03:43.888598 kubelet[2669]: I1013 00:03:43.888534 2669 topology_manager.go:138] "Creating topology manager with none 
policy" Oct 13 00:03:43.888598 kubelet[2669]: I1013 00:03:43.888542 2669 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 00:03:43.888598 kubelet[2669]: I1013 00:03:43.888565 2669 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 00:03:43.889413 kubelet[2669]: I1013 00:03:43.889387 2669 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:03:43.889534 kubelet[2669]: I1013 00:03:43.889522 2669 kubelet.go:475] "Attempting to sync node with API server" Oct 13 00:03:43.889559 kubelet[2669]: I1013 00:03:43.889542 2669 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 00:03:43.889579 kubelet[2669]: I1013 00:03:43.889568 2669 kubelet.go:387] "Adding apiserver pod source" Oct 13 00:03:43.889579 kubelet[2669]: I1013 00:03:43.889577 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 00:03:43.890536 kubelet[2669]: I1013 00:03:43.890502 2669 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 00:03:43.891050 kubelet[2669]: I1013 00:03:43.891020 2669 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 00:03:43.891098 kubelet[2669]: I1013 00:03:43.891064 2669 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 00:03:43.896170 kubelet[2669]: I1013 00:03:43.894533 2669 server.go:1262] "Started kubelet" Oct 13 00:03:43.896170 kubelet[2669]: I1013 00:03:43.895217 2669 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 00:03:43.896170 kubelet[2669]: I1013 00:03:43.895293 2669 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 00:03:43.896170 kubelet[2669]: I1013 00:03:43.895524 2669 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 00:03:43.896170 kubelet[2669]: I1013 00:03:43.896135 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 00:03:43.902629 kubelet[2669]: I1013 00:03:43.902594 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 00:03:43.902772 kubelet[2669]: I1013 00:03:43.902755 2669 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 00:03:43.902927 kubelet[2669]: E1013 00:03:43.902908 2669 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:03:43.902927 kubelet[2669]: I1013 00:03:43.902903 2669 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 00:03:43.903428 kubelet[2669]: I1013 00:03:43.903402 2669 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 00:03:43.903524 kubelet[2669]: I1013 00:03:43.903507 2669 reconciler.go:29] "Reconciler: start to sync state" Oct 13 00:03:43.906817 kubelet[2669]: I1013 00:03:43.906795 2669 server.go:310] "Adding debug handlers to kubelet server" Oct 13 00:03:43.908263 kubelet[2669]: I1013 00:03:43.908205 2669 factory.go:223] Registration of the systemd container factory successfully Oct 13 00:03:43.908343 kubelet[2669]: I1013 00:03:43.908319 2669 factory.go:221] Registration of the crio container factory failed: 
Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 00:03:43.915753 kubelet[2669]: I1013 00:03:43.915737 2669 factory.go:223] Registration of the containerd container factory successfully Oct 13 00:03:43.922243 kubelet[2669]: I1013 00:03:43.922207 2669 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 00:03:43.922925 kubelet[2669]: E1013 00:03:43.922899 2669 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 00:03:43.924294 kubelet[2669]: I1013 00:03:43.924200 2669 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 00:03:43.924294 kubelet[2669]: I1013 00:03:43.924277 2669 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 00:03:43.924294 kubelet[2669]: I1013 00:03:43.924300 2669 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 00:03:43.924398 kubelet[2669]: E1013 00:03:43.924339 2669 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:03:43.954125 kubelet[2669]: I1013 00:03:43.954029 2669 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 00:03:43.954125 kubelet[2669]: I1013 00:03:43.954054 2669 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 00:03:43.954125 kubelet[2669]: I1013 00:03:43.954076 2669 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:03:43.954277 kubelet[2669]: I1013 00:03:43.954200 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 00:03:43.954277 kubelet[2669]: I1013 00:03:43.954210 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 00:03:43.954277 kubelet[2669]: I1013 00:03:43.954226 2669 policy_none.go:49] "None policy: Start" Oct 13 00:03:43.954343 kubelet[2669]: I1013 00:03:43.954330 2669 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 00:03:43.954361 kubelet[2669]: I1013 00:03:43.954344 2669 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 00:03:43.954511 kubelet[2669]: I1013 00:03:43.954494 2669 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 00:03:43.954511 kubelet[2669]: I1013 00:03:43.954511 2669 policy_none.go:47] "Start" Oct 13 00:03:43.958567 kubelet[2669]: E1013 00:03:43.958544 2669 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 00:03:43.958725 kubelet[2669]: I1013 00:03:43.958703 2669 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:03:43.958773 kubelet[2669]: I1013 00:03:43.958721 2669 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:03:43.958996 kubelet[2669]: I1013 00:03:43.958965 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:03:43.960018 kubelet[2669]: E1013 00:03:43.959997 2669 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 00:03:44.025621 kubelet[2669]: I1013 00:03:44.025578 2669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:44.025736 kubelet[2669]: I1013 00:03:44.025636 2669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:44.025761 kubelet[2669]: I1013 00:03:44.025746 2669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:44.060963 kubelet[2669]: I1013 00:03:44.060921 2669 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:03:44.067577 kubelet[2669]: I1013 00:03:44.067476 2669 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 00:03:44.067659 kubelet[2669]: I1013 00:03:44.067626 2669 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 00:03:44.205386 kubelet[2669]: I1013 00:03:44.205229 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:44.205386 kubelet[2669]: I1013 00:03:44.205307 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:44.205386 kubelet[2669]: I1013 00:03:44.205343 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:44.205386 kubelet[2669]: I1013 00:03:44.205370 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c47676d5639d75044526d04cc38ba1c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c47676d5639d75044526d04cc38ba1c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:44.205552 kubelet[2669]: I1013 00:03:44.205440 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c47676d5639d75044526d04cc38ba1c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c47676d5639d75044526d04cc38ba1c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:44.205552 kubelet[2669]: I1013 00:03:44.205484 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:44.205552 kubelet[2669]: I1013 00:03:44.205529 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:44.205606 kubelet[2669]: I1013 00:03:44.205582 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c47676d5639d75044526d04cc38ba1c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c47676d5639d75044526d04cc38ba1c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:44.205627 kubelet[2669]: I1013 00:03:44.205609 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:03:44.890882 kubelet[2669]: I1013 00:03:44.890827 2669 apiserver.go:52] "Watching apiserver" Oct 13 00:03:44.903852 kubelet[2669]: I1013 00:03:44.903787 2669 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 00:03:44.940943 kubelet[2669]: I1013 00:03:44.940755 2669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:44.940943 kubelet[2669]: I1013 00:03:44.940845 2669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:44.953260 kubelet[2669]: E1013 00:03:44.952215 2669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 00:03:44.953260 kubelet[2669]: E1013 00:03:44.952270 2669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 00:03:44.983120 kubelet[2669]: I1013 00:03:44.983052 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.983035588 podStartE2EDuration="983.035588ms" podCreationTimestamp="2025-10-13 00:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:03:44.969676531 +0000 UTC m=+1.134946192" watchObservedRunningTime="2025-10-13 00:03:44.983035588 +0000 UTC m=+1.148305249" Oct 13 00:03:44.983388 kubelet[2669]: I1013 00:03:44.983178 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.983173189 podStartE2EDuration="983.173189ms" podCreationTimestamp="2025-10-13 00:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:03:44.978363875 +0000 UTC m=+1.143633536" watchObservedRunningTime="2025-10-13 00:03:44.983173189 +0000 UTC m=+1.148442850" Oct 13 00:03:44.999640 kubelet[2669]: I1013 00:03:44.999590 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.999574684 podStartE2EDuration="999.574684ms" podCreationTimestamp="2025-10-13 00:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-13 00:03:44.990319117 +0000 UTC m=+1.155588738" watchObservedRunningTime="2025-10-13 00:03:44.999574684 +0000 UTC m=+1.164844385" Oct 13 00:03:49.499166 kubelet[2669]: I1013 00:03:49.499133 2669 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 00:03:49.500008 containerd[1536]: time="2025-10-13T00:03:49.499970975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 00:03:49.500458 kubelet[2669]: I1013 00:03:49.500130 2669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 00:03:50.401179 systemd[1]: Created slice kubepods-besteffort-podefdebe62_5a1a_4f1b_adac_e2d9a59c9257.slice - libcontainer container kubepods-besteffort-podefdebe62_5a1a_4f1b_adac_e2d9a59c9257.slice. Oct 13 00:03:50.447254 kubelet[2669]: I1013 00:03:50.447206 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efdebe62-5a1a-4f1b-adac-e2d9a59c9257-kube-proxy\") pod \"kube-proxy-8qf6p\" (UID: \"efdebe62-5a1a-4f1b-adac-e2d9a59c9257\") " pod="kube-system/kube-proxy-8qf6p" Oct 13 00:03:50.447254 kubelet[2669]: I1013 00:03:50.447261 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efdebe62-5a1a-4f1b-adac-e2d9a59c9257-xtables-lock\") pod \"kube-proxy-8qf6p\" (UID: \"efdebe62-5a1a-4f1b-adac-e2d9a59c9257\") " pod="kube-system/kube-proxy-8qf6p" Oct 13 00:03:50.447402 kubelet[2669]: I1013 00:03:50.447278 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvb6x\" (UniqueName: \"kubernetes.io/projected/efdebe62-5a1a-4f1b-adac-e2d9a59c9257-kube-api-access-pvb6x\") pod \"kube-proxy-8qf6p\" (UID: \"efdebe62-5a1a-4f1b-adac-e2d9a59c9257\") " pod="kube-system/kube-proxy-8qf6p" Oct 13 00:03:50.447402 kubelet[2669]: I1013 00:03:50.447299 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efdebe62-5a1a-4f1b-adac-e2d9a59c9257-lib-modules\") pod \"kube-proxy-8qf6p\" (UID: \"efdebe62-5a1a-4f1b-adac-e2d9a59c9257\") " pod="kube-system/kube-proxy-8qf6p" Oct 13 00:03:50.556412 kubelet[2669]: E1013 00:03:50.556375 2669 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 00:03:50.556412 kubelet[2669]: E1013 00:03:50.556410 2669 projected.go:196] Error preparing data for projected volume kube-api-access-pvb6x for pod kube-system/kube-proxy-8qf6p: configmap "kube-root-ca.crt" not found Oct 13 00:03:50.556821 kubelet[2669]: E1013 00:03:50.556474 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/efdebe62-5a1a-4f1b-adac-e2d9a59c9257-kube-api-access-pvb6x podName:efdebe62-5a1a-4f1b-adac-e2d9a59c9257 nodeName:}" failed. No retries permitted until 2025-10-13 00:03:51.056451763 +0000 UTC m=+7.221721424 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pvb6x" (UniqueName: "kubernetes.io/projected/efdebe62-5a1a-4f1b-adac-e2d9a59c9257-kube-api-access-pvb6x") pod "kube-proxy-8qf6p" (UID: "efdebe62-5a1a-4f1b-adac-e2d9a59c9257") : configmap "kube-root-ca.crt" not found Oct 13 00:03:50.764193 systemd[1]: Created slice kubepods-besteffort-podf20a5b57_691c_4af2_8845_fdc7c49c144b.slice - libcontainer container kubepods-besteffort-podf20a5b57_691c_4af2_8845_fdc7c49c144b.slice. Oct 13 00:03:50.850861 kubelet[2669]: I1013 00:03:50.850814 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9kg\" (UniqueName: \"kubernetes.io/projected/f20a5b57-691c-4af2-8845-fdc7c49c144b-kube-api-access-jd9kg\") pod \"tigera-operator-db78d5bd4-lvxtg\" (UID: \"f20a5b57-691c-4af2-8845-fdc7c49c144b\") " pod="tigera-operator/tigera-operator-db78d5bd4-lvxtg" Oct 13 00:03:50.850861 kubelet[2669]: I1013 00:03:50.850870 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f20a5b57-691c-4af2-8845-fdc7c49c144b-var-lib-calico\") pod \"tigera-operator-db78d5bd4-lvxtg\" (UID: \"f20a5b57-691c-4af2-8845-fdc7c49c144b\") " pod="tigera-operator/tigera-operator-db78d5bd4-lvxtg" Oct 13 00:03:51.073717 containerd[1536]: time="2025-10-13T00:03:51.073316865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-lvxtg,Uid:f20a5b57-691c-4af2-8845-fdc7c49c144b,Namespace:tigera-operator,Attempt:0,}" Oct 13 00:03:51.089442 containerd[1536]: time="2025-10-13T00:03:51.088726045Z" level=info msg="connecting to shim d8e8e2aa2ab2744a5715e0b784d523a0fb8a377a3a82b133749502e8608d0a38" address="unix:///run/containerd/s/a64a38b805b669448afe338a7c8b32c4702d9d1a855beb5998133a38aba84631" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:03:51.123495 systemd[1]: Started cri-containerd-d8e8e2aa2ab2744a5715e0b784d523a0fb8a377a3a82b133749502e8608d0a38.scope - libcontainer container d8e8e2aa2ab2744a5715e0b784d523a0fb8a377a3a82b133749502e8608d0a38. Oct 13 00:03:51.169801 containerd[1536]: time="2025-10-13T00:03:51.169546680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-lvxtg,Uid:f20a5b57-691c-4af2-8845-fdc7c49c144b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d8e8e2aa2ab2744a5715e0b784d523a0fb8a377a3a82b133749502e8608d0a38\"" Oct 13 00:03:51.172342 containerd[1536]: time="2025-10-13T00:03:51.172154643Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 00:03:51.313455 containerd[1536]: time="2025-10-13T00:03:51.313108108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qf6p,Uid:efdebe62-5a1a-4f1b-adac-e2d9a59c9257,Namespace:kube-system,Attempt:0,}" Oct 13 00:03:51.331136 containerd[1536]: time="2025-10-13T00:03:51.330528057Z" level=info msg="connecting to shim edb71469f8fd7a4498e1498fb3cd55f88e85d549e907bb9fc31ddcacea1d2580" address="unix:///run/containerd/s/d7a87ae803e23e1e8b5b428ed646228c3225051f1b6d5c71eb9a6871db7a6fa6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:03:51.355431 systemd[1]: Started cri-containerd-edb71469f8fd7a4498e1498fb3cd55f88e85d549e907bb9fc31ddcacea1d2580.scope - libcontainer container edb71469f8fd7a4498e1498fb3cd55f88e85d549e907bb9fc31ddcacea1d2580. 
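Annotation: the MountVolume.SetUp failure above is retried by the kubelet with an increasing durationBeforeRetry, starting at the 500ms shown in the log. The sketch below imitates that doubling backoff pattern in plain Go; the 500ms initial delay comes from the log, while the 2x growth factor and the cap are stated assumptions rather than values read from this system.

```go
// mountretry.go: sketch of a doubling retry backoff like the one applied to
// the failed kube-api-access volume mount above. The 500ms initial delay is
// from the log ("durationBeforeRetry 500ms"); the 2x factor and the cap are
// assumptions for illustration.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume stands in for the real SetUp call; here it fails until the
// kube-root-ca.crt ConfigMap "appears" on the third attempt.
func mountVolume(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond                // initial durationBeforeRetry (from the log)
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap

	for attempt := 1; ; attempt++ {
		err := mountVolume(attempt)
		if err == nil {
			fmt.Printf("attempt %d: mount succeeded\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```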
Oct 13 00:03:51.388321 containerd[1536]: time="2025-10-13T00:03:51.387778516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qf6p,Uid:efdebe62-5a1a-4f1b-adac-e2d9a59c9257,Namespace:kube-system,Attempt:0,} returns sandbox id \"edb71469f8fd7a4498e1498fb3cd55f88e85d549e907bb9fc31ddcacea1d2580\"" Oct 13 00:03:51.394518 containerd[1536]: time="2025-10-13T00:03:51.394482042Z" level=info msg="CreateContainer within sandbox \"edb71469f8fd7a4498e1498fb3cd55f88e85d549e907bb9fc31ddcacea1d2580\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 00:03:51.406092 containerd[1536]: time="2025-10-13T00:03:51.406031895Z" level=info msg="Container 41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:03:51.416252 containerd[1536]: time="2025-10-13T00:03:51.416203621Z" level=info msg="CreateContainer within sandbox \"edb71469f8fd7a4498e1498fb3cd55f88e85d549e907bb9fc31ddcacea1d2580\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced\"" Oct 13 00:03:51.417021 containerd[1536]: time="2025-10-13T00:03:51.416809697Z" level=info msg="StartContainer for \"41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced\"" Oct 13 00:03:51.418337 containerd[1536]: time="2025-10-13T00:03:51.418311822Z" level=info msg="connecting to shim 41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced" address="unix:///run/containerd/s/d7a87ae803e23e1e8b5b428ed646228c3225051f1b6d5c71eb9a6871db7a6fa6" protocol=ttrpc version=3 Oct 13 00:03:51.438412 systemd[1]: Started cri-containerd-41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced.scope - libcontainer container 41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced. Oct 13 00:03:51.470855 containerd[1536]: time="2025-10-13T00:03:51.470814267Z" level=info msg="StartContainer for \"41e1f53baf7e4ccff2ca7682f7297370cb3945b63aa28807a9cec2dc2949cced\" returns successfully" Oct 13 00:03:51.967447 kubelet[2669]: I1013 00:03:51.967367 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8qf6p" podStartSLOduration=1.967349751 podStartE2EDuration="1.967349751s" podCreationTimestamp="2025-10-13 00:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:03:51.967001319 +0000 UTC m=+8.132270980" watchObservedRunningTime="2025-10-13 00:03:51.967349751 +0000 UTC m=+8.132619412" Oct 13 00:03:52.065266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289713684.mount: Deactivated successfully. 
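Annotation: the pod_startup_latency_tracker entry above reports podStartE2EDuration for kube-proxy-8qf6p as the difference between podCreationTimestamp and the observed running time. That figure can be reproduced from the two timestamps printed in the entry; the sketch below parses them with the standard library, assuming the layout is Go's default time.Time string form that appears in the log.

```go
// podstartup.go: recompute the kube-proxy pod startup duration reported above
// from its creation and watch-observed running timestamps, both copied from
// the log line.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-10-13 00:03:50 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-10-13 00:03:51.967349751 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// Matches the podStartE2EDuration=1.967349751s in the log entry above.
	fmt.Printf("kube-proxy startup took %v\n", running.Sub(created))
}
```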
Oct 13 00:03:52.443494 containerd[1536]: time="2025-10-13T00:03:52.443396589Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:52.444518 containerd[1536]: time="2025-10-13T00:03:52.444465796Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Oct 13 00:03:52.445363 containerd[1536]: time="2025-10-13T00:03:52.445313775Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:52.447633 containerd[1536]: time="2025-10-13T00:03:52.447600274Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:03:52.448221 containerd[1536]: time="2025-10-13T00:03:52.448189934Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.27590873s" Oct 13 00:03:52.448287 containerd[1536]: time="2025-10-13T00:03:52.448222984Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Oct 13 00:03:52.452609 containerd[1536]: time="2025-10-13T00:03:52.452306072Z" level=info msg="CreateContainer within sandbox \"d8e8e2aa2ab2744a5715e0b784d523a0fb8a377a3a82b133749502e8608d0a38\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 00:03:52.458920 containerd[1536]: time="2025-10-13T00:03:52.458419580Z" level=info msg="Container cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:03:52.463114 containerd[1536]: time="2025-10-13T00:03:52.463069841Z" level=info msg="CreateContainer within sandbox \"d8e8e2aa2ab2744a5715e0b784d523a0fb8a377a3a82b133749502e8608d0a38\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9\"" Oct 13 00:03:52.463971 containerd[1536]: time="2025-10-13T00:03:52.463923502Z" level=info msg="StartContainer for \"cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9\"" Oct 13 00:03:52.464807 containerd[1536]: time="2025-10-13T00:03:52.464769921Z" level=info msg="connecting to shim cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9" address="unix:///run/containerd/s/a64a38b805b669448afe338a7c8b32c4702d9d1a855beb5998133a38aba84631" protocol=ttrpc version=3 Oct 13 00:03:52.495414 systemd[1]: Started cri-containerd-cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9.scope - libcontainer container cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9. 
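Annotation: the tigera-operator pull above reports 22152365 bytes read and a pull time of 1.27590873s, which gives a rough effective throughput for the registry fetch. A back-of-the-envelope calculation in Go, using only the figures printed in the log:

```go
// pullrate.go: estimate the effective pull throughput for the tigera operator
// image from the figures in the log above (22152365 bytes read, 1.27590873s
// reported pull time). A rough estimate only; it ignores layers already cached.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 22152365 // "bytes read" reported by containerd above

	elapsed, err := time.ParseDuration("1.27590873s") // reported pull time
	if err != nil {
		panic(err)
	}

	rate := float64(bytesRead) / elapsed.Seconds() // bytes per second
	// Prints roughly 16.6 MiB/s for the values in this log.
	fmt.Printf("pulled %d bytes in %v: %.1f MiB/s\n",
		bytesRead, elapsed, rate/(1024*1024))
}
```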
Oct 13 00:03:52.517996 containerd[1536]: time="2025-10-13T00:03:52.517903598Z" level=info msg="StartContainer for \"cb7b59395c67a7c3bb8a21ef8bac1ae71df475f40575bc97f7e0dcb3318069a9\" returns successfully" Oct 13 00:03:57.708710 sudo[1748]: pam_unix(sudo:session): session closed for user root Oct 13 00:03:57.712727 sshd[1747]: Connection closed by 10.0.0.1 port 49742 Oct 13 00:03:57.712265 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Oct 13 00:03:57.719585 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:49742.service: Deactivated successfully. Oct 13 00:03:57.722806 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 00:03:57.724556 systemd[1]: session-7.scope: Consumed 6.540s CPU time, 219.8M memory peak. Oct 13 00:03:57.726704 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. Oct 13 00:03:57.730949 systemd-logind[1518]: Removed session 7. Oct 13 00:03:58.971357 kubelet[2669]: I1013 00:03:58.971213 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-lvxtg" podStartSLOduration=7.693735959 podStartE2EDuration="8.971159924s" podCreationTimestamp="2025-10-13 00:03:50 +0000 UTC" firstStartedPulling="2025-10-13 00:03:51.171611987 +0000 UTC m=+7.336881648" lastFinishedPulling="2025-10-13 00:03:52.449035952 +0000 UTC m=+8.614305613" observedRunningTime="2025-10-13 00:03:52.968171758 +0000 UTC m=+9.133441419" watchObservedRunningTime="2025-10-13 00:03:58.971159924 +0000 UTC m=+15.136429585" Oct 13 00:04:00.492310 update_engine[1522]: I20251013 00:04:00.491905 1522 update_attempter.cc:509] Updating boot flags... Oct 13 00:04:01.678714 systemd[1]: Created slice kubepods-besteffort-pod7161a8fb_be49_45a3_b1d1_4d876061b494.slice - libcontainer container kubepods-besteffort-pod7161a8fb_be49_45a3_b1d1_4d876061b494.slice. Oct 13 00:04:01.721288 kubelet[2669]: I1013 00:04:01.721182 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pv2j\" (UniqueName: \"kubernetes.io/projected/7161a8fb-be49-45a3-b1d1-4d876061b494-kube-api-access-7pv2j\") pod \"calico-typha-5f85cc76ff-zsqdw\" (UID: \"7161a8fb-be49-45a3-b1d1-4d876061b494\") " pod="calico-system/calico-typha-5f85cc76ff-zsqdw" Oct 13 00:04:01.721288 kubelet[2669]: I1013 00:04:01.721263 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7161a8fb-be49-45a3-b1d1-4d876061b494-typha-certs\") pod \"calico-typha-5f85cc76ff-zsqdw\" (UID: \"7161a8fb-be49-45a3-b1d1-4d876061b494\") " pod="calico-system/calico-typha-5f85cc76ff-zsqdw" Oct 13 00:04:01.721653 kubelet[2669]: I1013 00:04:01.721376 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7161a8fb-be49-45a3-b1d1-4d876061b494-tigera-ca-bundle\") pod \"calico-typha-5f85cc76ff-zsqdw\" (UID: \"7161a8fb-be49-45a3-b1d1-4d876061b494\") " pod="calico-system/calico-typha-5f85cc76ff-zsqdw" Oct 13 00:04:01.988587 containerd[1536]: time="2025-10-13T00:04:01.988004694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f85cc76ff-zsqdw,Uid:7161a8fb-be49-45a3-b1d1-4d876061b494,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:02.008415 systemd[1]: Created slice kubepods-besteffort-pod38247645_21fa_42ed_9408_e793e560d12a.slice - libcontainer container kubepods-besteffort-pod38247645_21fa_42ed_9408_e793e560d12a.slice. 
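Annotation: the "Created slice" entries above show the naming the kubelet's systemd cgroup driver uses for pod cgroups: the QoS class is embedded in the slice name and dashes in the pod UID are replaced with underscores (kubepods-besteffort-pod38247645_21fa_42ed_9408_e793e560d12a.slice for pod UID 38247645-21fa-42ed-9408-e793e560d12a). The sketch below reproduces that mapping; the besteffort and burstable forms are visible in this log, while the guaranteed-pod form (no QoS segment) is an assumption about the scheme, and this is a naming illustration rather than kubelet code.

```go
// slicename.go: reproduce the systemd slice names created for pods, as seen
// in the "Created slice kubepods-..." entries above: dashes in the pod UID
// become underscores, prefixed with the QoS class where one applies.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds e.g. "kubepods-besteffort-pod<uid>.slice"; for
// guaranteed pods the QoS segment is assumed to be omitted.
func podSliceName(qosClass, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qosClass == "" || qosClass == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID of the calico-node pod from the entry above.
	fmt.Println(podSliceName("besteffort", "38247645-21fa-42ed-9408-e793e560d12a"))
	// UID of the kube-proxy pod seen earlier in the log.
	fmt.Println(podSliceName("besteffort", "efdebe62-5a1a-4f1b-adac-e2d9a59c9257"))
}
```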
Oct 13 00:04:02.024257 kubelet[2669]: I1013 00:04:02.024207 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-lib-modules\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024656 kubelet[2669]: I1013 00:04:02.024474 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-var-lib-calico\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024656 kubelet[2669]: I1013 00:04:02.024503 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-cni-net-dir\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024656 kubelet[2669]: I1013 00:04:02.024521 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/38247645-21fa-42ed-9408-e793e560d12a-node-certs\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024656 kubelet[2669]: I1013 00:04:02.024536 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-policysync\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024656 kubelet[2669]: I1013 00:04:02.024556 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-cni-bin-dir\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024864 kubelet[2669]: I1013 00:04:02.024591 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38247645-21fa-42ed-9408-e793e560d12a-tigera-ca-bundle\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024864 kubelet[2669]: I1013 00:04:02.024609 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ql7\" (UniqueName: \"kubernetes.io/projected/38247645-21fa-42ed-9408-e793e560d12a-kube-api-access-p6ql7\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024864 kubelet[2669]: I1013 00:04:02.024628 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-cni-log-dir\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024864 kubelet[2669]: I1013 00:04:02.024668 2669 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-flexvol-driver-host\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024864 kubelet[2669]: I1013 00:04:02.024725 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-xtables-lock\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.024976 kubelet[2669]: I1013 00:04:02.024744 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/38247645-21fa-42ed-9408-e793e560d12a-var-run-calico\") pod \"calico-node-c4755\" (UID: \"38247645-21fa-42ed-9408-e793e560d12a\") " pod="calico-system/calico-node-c4755" Oct 13 00:04:02.072197 containerd[1536]: time="2025-10-13T00:04:02.072143276Z" level=info msg="connecting to shim ae70f799ddaf3c33e28ef3f078d4c4a7c5bfc2b3a432da2cd9a1fb88ca8c4a6b" address="unix:///run/containerd/s/11b5ebc2487c2e7db386cce876e5405e1be15521b811b5b2021b8542b04f42d4" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:02.129889 systemd[1]: Started cri-containerd-ae70f799ddaf3c33e28ef3f078d4c4a7c5bfc2b3a432da2cd9a1fb88ca8c4a6b.scope - libcontainer container ae70f799ddaf3c33e28ef3f078d4c4a7c5bfc2b3a432da2cd9a1fb88ca8c4a6b. Oct 13 00:04:02.133729 kubelet[2669]: E1013 00:04:02.133676 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.133729 kubelet[2669]: W1013 00:04:02.133721 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.133885 kubelet[2669]: E1013 00:04:02.133746 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.136900 kubelet[2669]: E1013 00:04:02.136846 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.136900 kubelet[2669]: W1013 00:04:02.136883 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.136900 kubelet[2669]: E1013 00:04:02.136905 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.143322 kubelet[2669]: E1013 00:04:02.143229 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.143322 kubelet[2669]: W1013 00:04:02.143310 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.143322 kubelet[2669]: E1013 00:04:02.143331 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.167691 containerd[1536]: time="2025-10-13T00:04:02.167645158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f85cc76ff-zsqdw,Uid:7161a8fb-be49-45a3-b1d1-4d876061b494,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae70f799ddaf3c33e28ef3f078d4c4a7c5bfc2b3a432da2cd9a1fb88ca8c4a6b\"" Oct 13 00:04:02.169464 containerd[1536]: time="2025-10-13T00:04:02.169425519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 00:04:02.257082 kubelet[2669]: E1013 00:04:02.256971 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdk78" podUID="95c16496-d748-4240-90a8-51fe76dca721" Oct 13 00:04:02.320988 containerd[1536]: time="2025-10-13T00:04:02.320945034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c4755,Uid:38247645-21fa-42ed-9408-e793e560d12a,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:02.323466 kubelet[2669]: E1013 00:04:02.323425 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.323466 kubelet[2669]: W1013 00:04:02.323451 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.323573 kubelet[2669]: E1013 00:04:02.323471 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.323720 kubelet[2669]: E1013 00:04:02.323691 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.323749 kubelet[2669]: W1013 00:04:02.323703 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.323769 kubelet[2669]: E1013 00:04:02.323753 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.323951 kubelet[2669]: E1013 00:04:02.323931 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.323951 kubelet[2669]: W1013 00:04:02.323949 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.323998 kubelet[2669]: E1013 00:04:02.323982 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.324281 kubelet[2669]: E1013 00:04:02.324264 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.324281 kubelet[2669]: W1013 00:04:02.324279 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.324342 kubelet[2669]: E1013 00:04:02.324293 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.324506 kubelet[2669]: E1013 00:04:02.324491 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.324535 kubelet[2669]: W1013 00:04:02.324514 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.324535 kubelet[2669]: E1013 00:04:02.324524 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.324700 kubelet[2669]: E1013 00:04:02.324688 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.324700 kubelet[2669]: W1013 00:04:02.324698 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.324752 kubelet[2669]: E1013 00:04:02.324709 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.324880 kubelet[2669]: E1013 00:04:02.324867 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.324880 kubelet[2669]: W1013 00:04:02.324879 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.324940 kubelet[2669]: E1013 00:04:02.324887 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.325071 kubelet[2669]: E1013 00:04:02.325058 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.325071 kubelet[2669]: W1013 00:04:02.325070 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.325120 kubelet[2669]: E1013 00:04:02.325079 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.325395 kubelet[2669]: E1013 00:04:02.325378 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.325395 kubelet[2669]: W1013 00:04:02.325393 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.325453 kubelet[2669]: E1013 00:04:02.325404 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.325715 kubelet[2669]: E1013 00:04:02.325700 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.325715 kubelet[2669]: W1013 00:04:02.325714 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.325759 kubelet[2669]: E1013 00:04:02.325725 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.325936 kubelet[2669]: E1013 00:04:02.325907 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.325936 kubelet[2669]: W1013 00:04:02.325935 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.326002 kubelet[2669]: E1013 00:04:02.325944 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.326143 kubelet[2669]: E1013 00:04:02.326129 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.326172 kubelet[2669]: W1013 00:04:02.326142 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.326172 kubelet[2669]: E1013 00:04:02.326157 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.326461 kubelet[2669]: E1013 00:04:02.326446 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.326461 kubelet[2669]: W1013 00:04:02.326459 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.326509 kubelet[2669]: E1013 00:04:02.326468 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.326716 kubelet[2669]: E1013 00:04:02.326701 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.326716 kubelet[2669]: W1013 00:04:02.326715 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.326767 kubelet[2669]: E1013 00:04:02.326723 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.326872 kubelet[2669]: E1013 00:04:02.326860 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.326872 kubelet[2669]: W1013 00:04:02.326870 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.326927 kubelet[2669]: E1013 00:04:02.326878 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.327028 kubelet[2669]: E1013 00:04:02.327017 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.327051 kubelet[2669]: W1013 00:04:02.327028 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.327051 kubelet[2669]: E1013 00:04:02.327035 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.327304 kubelet[2669]: E1013 00:04:02.327280 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.327339 kubelet[2669]: W1013 00:04:02.327307 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.327339 kubelet[2669]: E1013 00:04:02.327320 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.327485 kubelet[2669]: E1013 00:04:02.327473 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.327506 kubelet[2669]: W1013 00:04:02.327485 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.327506 kubelet[2669]: E1013 00:04:02.327493 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.327716 kubelet[2669]: E1013 00:04:02.327681 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.327745 kubelet[2669]: W1013 00:04:02.327715 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.327745 kubelet[2669]: E1013 00:04:02.327725 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.327967 kubelet[2669]: E1013 00:04:02.327945 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.327997 kubelet[2669]: W1013 00:04:02.327974 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.327997 kubelet[2669]: E1013 00:04:02.327991 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.328406 kubelet[2669]: E1013 00:04:02.328383 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.328406 kubelet[2669]: W1013 00:04:02.328400 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.328461 kubelet[2669]: E1013 00:04:02.328421 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.328461 kubelet[2669]: I1013 00:04:02.328450 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/95c16496-d748-4240-90a8-51fe76dca721-varrun\") pod \"csi-node-driver-pdk78\" (UID: \"95c16496-d748-4240-90a8-51fe76dca721\") " pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:02.328678 kubelet[2669]: E1013 00:04:02.328645 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.328678 kubelet[2669]: W1013 00:04:02.328655 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.328678 kubelet[2669]: E1013 00:04:02.328665 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.328741 kubelet[2669]: I1013 00:04:02.328685 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95c16496-d748-4240-90a8-51fe76dca721-registration-dir\") pod \"csi-node-driver-pdk78\" (UID: \"95c16496-d748-4240-90a8-51fe76dca721\") " pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:02.328858 kubelet[2669]: E1013 00:04:02.328844 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.328881 kubelet[2669]: W1013 00:04:02.328857 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.328881 kubelet[2669]: E1013 00:04:02.328865 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.328940 kubelet[2669]: I1013 00:04:02.328881 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95c16496-d748-4240-90a8-51fe76dca721-socket-dir\") pod \"csi-node-driver-pdk78\" (UID: \"95c16496-d748-4240-90a8-51fe76dca721\") " pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:02.329125 kubelet[2669]: E1013 00:04:02.329108 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.329147 kubelet[2669]: W1013 00:04:02.329124 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.329147 kubelet[2669]: E1013 00:04:02.329134 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.329183 kubelet[2669]: I1013 00:04:02.329170 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95c16496-d748-4240-90a8-51fe76dca721-kubelet-dir\") pod \"csi-node-driver-pdk78\" (UID: \"95c16496-d748-4240-90a8-51fe76dca721\") " pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:02.329492 kubelet[2669]: E1013 00:04:02.329477 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.329525 kubelet[2669]: W1013 00:04:02.329492 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.329525 kubelet[2669]: E1013 00:04:02.329502 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.329601 kubelet[2669]: I1013 00:04:02.329562 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm482\" (UniqueName: \"kubernetes.io/projected/95c16496-d748-4240-90a8-51fe76dca721-kube-api-access-qm482\") pod \"csi-node-driver-pdk78\" (UID: \"95c16496-d748-4240-90a8-51fe76dca721\") " pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:02.329798 kubelet[2669]: E1013 00:04:02.329783 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.329821 kubelet[2669]: W1013 00:04:02.329798 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.329821 kubelet[2669]: E1013 00:04:02.329808 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.330048 kubelet[2669]: E1013 00:04:02.330033 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.330048 kubelet[2669]: W1013 00:04:02.330046 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.330092 kubelet[2669]: E1013 00:04:02.330057 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.330201 kubelet[2669]: E1013 00:04:02.330188 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.330201 kubelet[2669]: W1013 00:04:02.330199 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.330269 kubelet[2669]: E1013 00:04:02.330207 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.330373 kubelet[2669]: E1013 00:04:02.330360 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.330373 kubelet[2669]: W1013 00:04:02.330371 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.330416 kubelet[2669]: E1013 00:04:02.330379 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.330620 kubelet[2669]: E1013 00:04:02.330606 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.330645 kubelet[2669]: W1013 00:04:02.330621 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.330645 kubelet[2669]: E1013 00:04:02.330630 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.330866 kubelet[2669]: E1013 00:04:02.330853 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.330886 kubelet[2669]: W1013 00:04:02.330868 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.330886 kubelet[2669]: E1013 00:04:02.330878 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.331110 kubelet[2669]: E1013 00:04:02.331097 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.331138 kubelet[2669]: W1013 00:04:02.331110 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.331138 kubelet[2669]: E1013 00:04:02.331121 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.331364 kubelet[2669]: E1013 00:04:02.331350 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.331395 kubelet[2669]: W1013 00:04:02.331362 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.331395 kubelet[2669]: E1013 00:04:02.331374 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.331620 kubelet[2669]: E1013 00:04:02.331604 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.331620 kubelet[2669]: W1013 00:04:02.331618 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.331725 kubelet[2669]: E1013 00:04:02.331628 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.331837 kubelet[2669]: E1013 00:04:02.331823 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.331837 kubelet[2669]: W1013 00:04:02.331835 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.331887 kubelet[2669]: E1013 00:04:02.331844 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.350809 containerd[1536]: time="2025-10-13T00:04:02.350765858Z" level=info msg="connecting to shim 736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d" address="unix:///run/containerd/s/d48c8a74d8e52d9357ca93e5ea10fcc3a02b8ff41aab6943549ca27ac6dd37d9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:02.375458 systemd[1]: Started cri-containerd-736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d.scope - libcontainer container 736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d. Oct 13 00:04:02.411670 containerd[1536]: time="2025-10-13T00:04:02.411627526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c4755,Uid:38247645-21fa-42ed-9408-e793e560d12a,Namespace:calico-system,Attempt:0,} returns sandbox id \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\"" Oct 13 00:04:02.430770 kubelet[2669]: E1013 00:04:02.430742 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.430770 kubelet[2669]: W1013 00:04:02.430765 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.431040 kubelet[2669]: E1013 00:04:02.430785 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.431071 kubelet[2669]: E1013 00:04:02.431051 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.431071 kubelet[2669]: W1013 00:04:02.431061 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.431071 kubelet[2669]: E1013 00:04:02.431070 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.431392 kubelet[2669]: E1013 00:04:02.431375 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.431392 kubelet[2669]: W1013 00:04:02.431390 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.431468 kubelet[2669]: E1013 00:04:02.431414 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.431665 kubelet[2669]: E1013 00:04:02.431651 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.431702 kubelet[2669]: W1013 00:04:02.431666 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.431702 kubelet[2669]: E1013 00:04:02.431674 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.432228 kubelet[2669]: E1013 00:04:02.432017 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.432228 kubelet[2669]: W1013 00:04:02.432035 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.432228 kubelet[2669]: E1013 00:04:02.432046 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.433038 kubelet[2669]: E1013 00:04:02.433014 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.433113 kubelet[2669]: W1013 00:04:02.433074 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.433113 kubelet[2669]: E1013 00:04:02.433090 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.433408 kubelet[2669]: E1013 00:04:02.433387 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.433408 kubelet[2669]: W1013 00:04:02.433402 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.433460 kubelet[2669]: E1013 00:04:02.433417 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.434600 kubelet[2669]: E1013 00:04:02.434580 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.434600 kubelet[2669]: W1013 00:04:02.434594 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.434671 kubelet[2669]: E1013 00:04:02.434606 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.434913 kubelet[2669]: E1013 00:04:02.434826 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.434913 kubelet[2669]: W1013 00:04:02.434839 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.434913 kubelet[2669]: E1013 00:04:02.434849 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.435141 kubelet[2669]: E1013 00:04:02.435046 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.435141 kubelet[2669]: W1013 00:04:02.435060 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.435141 kubelet[2669]: E1013 00:04:02.435070 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.435755 kubelet[2669]: E1013 00:04:02.435706 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.435755 kubelet[2669]: W1013 00:04:02.435724 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.435755 kubelet[2669]: E1013 00:04:02.435759 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.436100 kubelet[2669]: E1013 00:04:02.436083 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.436100 kubelet[2669]: W1013 00:04:02.436098 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.436150 kubelet[2669]: E1013 00:04:02.436108 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.436409 kubelet[2669]: E1013 00:04:02.436392 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.436409 kubelet[2669]: W1013 00:04:02.436408 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.436467 kubelet[2669]: E1013 00:04:02.436420 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.436670 kubelet[2669]: E1013 00:04:02.436655 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.436702 kubelet[2669]: W1013 00:04:02.436670 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.436702 kubelet[2669]: E1013 00:04:02.436681 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.436914 kubelet[2669]: E1013 00:04:02.436900 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.436914 kubelet[2669]: W1013 00:04:02.436913 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.436987 kubelet[2669]: E1013 00:04:02.436932 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.437195 kubelet[2669]: E1013 00:04:02.437175 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.437195 kubelet[2669]: W1013 00:04:02.437193 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.437246 kubelet[2669]: E1013 00:04:02.437204 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.438504 kubelet[2669]: E1013 00:04:02.438484 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.438537 kubelet[2669]: W1013 00:04:02.438506 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.438537 kubelet[2669]: E1013 00:04:02.438518 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.438810 kubelet[2669]: E1013 00:04:02.438791 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.438810 kubelet[2669]: W1013 00:04:02.438807 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.438856 kubelet[2669]: E1013 00:04:02.438820 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.439031 kubelet[2669]: E1013 00:04:02.439018 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.439031 kubelet[2669]: W1013 00:04:02.439029 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.439084 kubelet[2669]: E1013 00:04:02.439039 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.439194 kubelet[2669]: E1013 00:04:02.439181 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.439194 kubelet[2669]: W1013 00:04:02.439192 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.439243 kubelet[2669]: E1013 00:04:02.439199 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.439358 kubelet[2669]: E1013 00:04:02.439347 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.439379 kubelet[2669]: W1013 00:04:02.439357 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.439379 kubelet[2669]: E1013 00:04:02.439366 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:02.439518 kubelet[2669]: E1013 00:04:02.439506 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.439518 kubelet[2669]: W1013 00:04:02.439516 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.439566 kubelet[2669]: E1013 00:04:02.439524 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.439699 kubelet[2669]: E1013 00:04:02.439686 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.439699 kubelet[2669]: W1013 00:04:02.439697 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.439750 kubelet[2669]: E1013 00:04:02.439705 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.440142 kubelet[2669]: E1013 00:04:02.440118 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.440142 kubelet[2669]: W1013 00:04:02.440135 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.440209 kubelet[2669]: E1013 00:04:02.440146 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.441139 kubelet[2669]: E1013 00:04:02.441120 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.441139 kubelet[2669]: W1013 00:04:02.441138 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.441226 kubelet[2669]: E1013 00:04:02.441152 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:02.451956 kubelet[2669]: E1013 00:04:02.451910 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:02.451956 kubelet[2669]: W1013 00:04:02.451960 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:02.452087 kubelet[2669]: E1013 00:04:02.451980 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:03.187120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809201374.mount: Deactivated successfully. Oct 13 00:04:03.574230 containerd[1536]: time="2025-10-13T00:04:03.573989677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:03.575051 containerd[1536]: time="2025-10-13T00:04:03.575022174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Oct 13 00:04:03.576652 containerd[1536]: time="2025-10-13T00:04:03.576585163Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:03.580262 containerd[1536]: time="2025-10-13T00:04:03.578789982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:03.580262 containerd[1536]: time="2025-10-13T00:04:03.579997710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.410534463s" Oct 13 00:04:03.580262 containerd[1536]: time="2025-10-13T00:04:03.580037797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Oct 13 00:04:03.582230 containerd[1536]: time="2025-10-13T00:04:03.582189006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 00:04:03.600408 containerd[1536]: time="2025-10-13T00:04:03.600368932Z" level=info msg="CreateContainer within sandbox \"ae70f799ddaf3c33e28ef3f078d4c4a7c5bfc2b3a432da2cd9a1fb88ca8c4a6b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 00:04:03.610535 containerd[1536]: time="2025-10-13T00:04:03.610477630Z" level=info msg="Container 654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:03.618921 containerd[1536]: time="2025-10-13T00:04:03.618856190Z" level=info msg="CreateContainer within sandbox \"ae70f799ddaf3c33e28ef3f078d4c4a7c5bfc2b3a432da2cd9a1fb88ca8c4a6b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e\"" Oct 13 00:04:03.621346 containerd[1536]: time="2025-10-13T00:04:03.620431021Z" level=info msg="StartContainer for \"654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e\"" Oct 13 00:04:03.623164 containerd[1536]: time="2025-10-13T00:04:03.622519260Z" level=info msg="connecting to shim 654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e" address="unix:///run/containerd/s/11b5ebc2487c2e7db386cce876e5405e1be15521b811b5b2021b8542b04f42d4" protocol=ttrpc version=3 Oct 13 00:04:03.644468 systemd[1]: Started cri-containerd-654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e.scope - libcontainer container 654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e. 
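The recurring FlexVolume errors in this section all trace back to one probe: the kubelet finds the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and tries to run its uds executable with the init argument. The binary is not present on the host yet ("executable file not found in $PATH"), so the call yields empty output and the JSON unmarshal fails with "unexpected end of JSON input". For context, a FlexVolume driver is an executable that answers init with a JSON status object on stdout; a minimal sketch of that contract (not Calico's actual uds binary) would look like:

```go
package main

import (
	"fmt"
	"os"
)

// Minimal sketch of the FlexVolume "init" contract the kubelet is probing for
// above. This is NOT Calico's uds driver, only the shape of the stdout response
// whose absence produces the "unexpected end of JSON input" errors in the log.
func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// The kubelet expects at least a "status" field; "capabilities" is optional.
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// Other FlexVolume calls (attach, mount, unmount, ...) are out of scope here.
	fmt.Println(`{"status":"Not supported"}`)
}
```

The pod2daemon-flexvol image pull that begins just above is the component that normally drops this driver binary onto the host, so these probe errors are expected to stop once the calico-node flexvol init container has run.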
Oct 13 00:04:03.682219 containerd[1536]: time="2025-10-13T00:04:03.682042533Z" level=info msg="StartContainer for \"654744acc69ce0471a95e86ebf04a85d3b6248caabd3f117d45e7a8bad9c815e\" returns successfully" Oct 13 00:04:03.925862 kubelet[2669]: E1013 00:04:03.925747 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdk78" podUID="95c16496-d748-4240-90a8-51fe76dca721" Oct 13 00:04:04.040109 kubelet[2669]: E1013 00:04:04.039682 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.040109 kubelet[2669]: W1013 00:04:04.039722 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.040109 kubelet[2669]: E1013 00:04:04.039741 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.040109 kubelet[2669]: E1013 00:04:04.039902 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.040109 kubelet[2669]: W1013 00:04:04.039909 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.040109 kubelet[2669]: E1013 00:04:04.039943 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.040109 kubelet[2669]: E1013 00:04:04.040081 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.040109 kubelet[2669]: W1013 00:04:04.040089 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.040109 kubelet[2669]: E1013 00:04:04.040096 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.040444 kubelet[2669]: E1013 00:04:04.040231 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.040444 kubelet[2669]: W1013 00:04:04.040250 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.040444 kubelet[2669]: E1013 00:04:04.040258 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.040444 kubelet[2669]: E1013 00:04:04.040393 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.040444 kubelet[2669]: W1013 00:04:04.040402 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.040444 kubelet[2669]: E1013 00:04:04.040409 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.040561 kubelet[2669]: E1013 00:04:04.040531 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.040561 kubelet[2669]: W1013 00:04:04.040540 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.040561 kubelet[2669]: E1013 00:04:04.040547 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.040659 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041304 kubelet[2669]: W1013 00:04:04.040670 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.040677 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.040790 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041304 kubelet[2669]: W1013 00:04:04.040796 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.040802 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.040922 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041304 kubelet[2669]: W1013 00:04:04.040928 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.040934 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.041304 kubelet[2669]: E1013 00:04:04.041054 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041545 kubelet[2669]: W1013 00:04:04.041060 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041545 kubelet[2669]: E1013 00:04:04.041066 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041545 kubelet[2669]: E1013 00:04:04.041168 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041545 kubelet[2669]: W1013 00:04:04.041174 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041545 kubelet[2669]: E1013 00:04:04.041180 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041545 kubelet[2669]: E1013 00:04:04.041399 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041545 kubelet[2669]: W1013 00:04:04.041407 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041545 kubelet[2669]: E1013 00:04:04.041415 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041688 kubelet[2669]: E1013 00:04:04.041571 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041688 kubelet[2669]: W1013 00:04:04.041579 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041688 kubelet[2669]: E1013 00:04:04.041589 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.041743 kubelet[2669]: E1013 00:04:04.041708 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041743 kubelet[2669]: W1013 00:04:04.041715 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041743 kubelet[2669]: E1013 00:04:04.041721 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.041851 kubelet[2669]: E1013 00:04:04.041836 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.041883 kubelet[2669]: W1013 00:04:04.041856 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.041883 kubelet[2669]: E1013 00:04:04.041864 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.047767 kubelet[2669]: E1013 00:04:04.047743 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.047767 kubelet[2669]: W1013 00:04:04.047762 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.047922 kubelet[2669]: E1013 00:04:04.047774 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.048116 kubelet[2669]: E1013 00:04:04.048099 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.048148 kubelet[2669]: W1013 00:04:04.048115 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.048148 kubelet[2669]: E1013 00:04:04.048127 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.048354 kubelet[2669]: E1013 00:04:04.048341 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.048354 kubelet[2669]: W1013 00:04:04.048352 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.048407 kubelet[2669]: E1013 00:04:04.048361 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.048519 kubelet[2669]: E1013 00:04:04.048496 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.048544 kubelet[2669]: W1013 00:04:04.048519 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.048544 kubelet[2669]: E1013 00:04:04.048528 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.048588 kubelet[2669]: I1013 00:04:04.048549 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f85cc76ff-zsqdw" podStartSLOduration=1.635760664 podStartE2EDuration="3.048537357s" podCreationTimestamp="2025-10-13 00:04:01 +0000 UTC" firstStartedPulling="2025-10-13 00:04:02.168978879 +0000 UTC m=+18.334248540" lastFinishedPulling="2025-10-13 00:04:03.581755532 +0000 UTC m=+19.747025233" observedRunningTime="2025-10-13 00:04:04.046528587 +0000 UTC m=+20.211798288" watchObservedRunningTime="2025-10-13 00:04:04.048537357 +0000 UTC m=+20.213807018" Oct 13 00:04:04.048718 kubelet[2669]: E1013 00:04:04.048706 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.048718 kubelet[2669]: W1013 00:04:04.048717 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.048762 kubelet[2669]: E1013 00:04:04.048727 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.048981 kubelet[2669]: E1013 00:04:04.048964 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.049018 kubelet[2669]: W1013 00:04:04.048980 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.049018 kubelet[2669]: E1013 00:04:04.048991 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.049177 kubelet[2669]: E1013 00:04:04.049166 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.049177 kubelet[2669]: W1013 00:04:04.049175 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.049243 kubelet[2669]: E1013 00:04:04.049183 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.049387 kubelet[2669]: E1013 00:04:04.049373 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.049387 kubelet[2669]: W1013 00:04:04.049386 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.049456 kubelet[2669]: E1013 00:04:04.049397 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.049574 kubelet[2669]: E1013 00:04:04.049562 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.049611 kubelet[2669]: W1013 00:04:04.049574 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.049611 kubelet[2669]: E1013 00:04:04.049583 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.049730 kubelet[2669]: E1013 00:04:04.049719 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.049730 kubelet[2669]: W1013 00:04:04.049728 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.049786 kubelet[2669]: E1013 00:04:04.049737 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.049864 kubelet[2669]: E1013 00:04:04.049855 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.049864 kubelet[2669]: W1013 00:04:04.049864 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.049906 kubelet[2669]: E1013 00:04:04.049871 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.050012 kubelet[2669]: E1013 00:04:04.050003 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.050012 kubelet[2669]: W1013 00:04:04.050011 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.050068 kubelet[2669]: E1013 00:04:04.050019 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.050446 kubelet[2669]: E1013 00:04:04.050338 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.050446 kubelet[2669]: W1013 00:04:04.050354 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.050446 kubelet[2669]: E1013 00:04:04.050365 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.050604 kubelet[2669]: E1013 00:04:04.050592 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.050659 kubelet[2669]: W1013 00:04:04.050648 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.050706 kubelet[2669]: E1013 00:04:04.050697 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.050900 kubelet[2669]: E1013 00:04:04.050889 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.050965 kubelet[2669]: W1013 00:04:04.050956 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.051011 kubelet[2669]: E1013 00:04:04.051002 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.051616 kubelet[2669]: E1013 00:04:04.051222 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.051616 kubelet[2669]: W1013 00:04:04.051244 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.051616 kubelet[2669]: E1013 00:04:04.051254 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.051616 kubelet[2669]: E1013 00:04:04.051468 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.051616 kubelet[2669]: W1013 00:04:04.051479 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.051616 kubelet[2669]: E1013 00:04:04.051488 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:04:04.051840 kubelet[2669]: E1013 00:04:04.051827 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:04:04.051896 kubelet[2669]: W1013 00:04:04.051885 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:04:04.051944 kubelet[2669]: E1013 00:04:04.051934 2669 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:04:04.753300 containerd[1536]: time="2025-10-13T00:04:04.753220926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:04.753797 containerd[1536]: time="2025-10-13T00:04:04.753760775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 00:04:04.754690 containerd[1536]: time="2025-10-13T00:04:04.754632318Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:04.758136 containerd[1536]: time="2025-10-13T00:04:04.758091084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:04.758804 containerd[1536]: time="2025-10-13T00:04:04.758773076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.176540542s" Oct 13 00:04:04.758852 containerd[1536]: time="2025-10-13T00:04:04.758812162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 00:04:04.764822 containerd[1536]: time="2025-10-13T00:04:04.764780580Z" level=info msg="CreateContainer within sandbox \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 00:04:04.773176 containerd[1536]: time="2025-10-13T00:04:04.772772930Z" level=info msg="Container b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:04.785623 containerd[1536]: time="2025-10-13T00:04:04.785553263Z" level=info msg="CreateContainer within sandbox \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\"" Oct 13 00:04:04.787055 containerd[1536]: time="2025-10-13T00:04:04.786322910Z" level=info msg="StartContainer for \"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\"" Oct 13 00:04:04.788016 containerd[1536]: time="2025-10-13T00:04:04.787981421Z" level=info msg="connecting to shim b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6" address="unix:///run/containerd/s/d48c8a74d8e52d9357ca93e5ea10fcc3a02b8ff41aab6943549ca27ac6dd37d9" protocol=ttrpc version=3 Oct 13 00:04:04.818491 systemd[1]: Started cri-containerd-b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6.scope - libcontainer container b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6. Oct 13 00:04:04.869896 systemd[1]: cri-containerd-b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6.scope: Deactivated successfully. 
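
Annotation: the repeated driver-call.go / plugins.go triplet above comes from the kubelet's FlexVolume prober exec'ing a driver that the pod2daemon-flexvol container has not yet installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds. The sketch below is not the kubelet source (the driverStatus shape and probeDriver helper are illustrative); it only shows why an absent driver surfaces as "unexpected end of JSON input": the probe invokes the binary with the init argument and unmarshals its stdout, and an empty stdout fails json.Unmarshal with exactly that message.

    // flexprobe.go: hypothetical stand-in for the kubelet FlexVolume probe path.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is an illustrative subset of a FlexVolume driver reply.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // probeDriver runs "<driver> init" and decodes the JSON reply.
    func probeDriver(path string) (*driverStatus, error) {
        out, execErr := exec.Command(path, "init").CombinedOutput()
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // With the uds binary missing, out is empty and err reads
            // "unexpected end of JSON input", matching the kubelet log.
            return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }

Once the flexvol-driver container created above has run and dropped the uds binary into place, the same probe presumably succeeds, which is consistent with the messages not recurring later in this section.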
Oct 13 00:04:04.897888 containerd[1536]: time="2025-10-13T00:04:04.897801173Z" level=info msg="StartContainer for \"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\" returns successfully" Oct 13 00:04:04.918721 containerd[1536]: time="2025-10-13T00:04:04.918666232Z" level=info msg="received exit event container_id:\"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\" id:\"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\" pid:3372 exited_at:{seconds:1760313844 nanos:913510787}" Oct 13 00:04:04.918944 containerd[1536]: time="2025-10-13T00:04:04.918763127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\" id:\"b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6\" pid:3372 exited_at:{seconds:1760313844 nanos:913510787}" Oct 13 00:04:04.943937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6163400539c74a748717ccfe97e98ac6443bdd9ad9a317389bc03a68f6766c6-rootfs.mount: Deactivated successfully. Oct 13 00:04:05.028333 kubelet[2669]: I1013 00:04:05.028206 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:05.030256 containerd[1536]: time="2025-10-13T00:04:05.029988292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 00:04:05.925303 kubelet[2669]: E1013 00:04:05.925228 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdk78" podUID="95c16496-d748-4240-90a8-51fe76dca721" Oct 13 00:04:07.488149 containerd[1536]: time="2025-10-13T00:04:07.488093690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:07.488858 containerd[1536]: time="2025-10-13T00:04:07.488830075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 00:04:07.489593 containerd[1536]: time="2025-10-13T00:04:07.489555178Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:07.492171 containerd[1536]: time="2025-10-13T00:04:07.492122024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:07.492783 containerd[1536]: time="2025-10-13T00:04:07.492739632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.462711173s" Oct 13 00:04:07.492783 containerd[1536]: time="2025-10-13T00:04:07.492775357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 00:04:07.499048 containerd[1536]: time="2025-10-13T00:04:07.498986042Z" level=info msg="CreateContainer within sandbox \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\" 
for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 00:04:07.506370 containerd[1536]: time="2025-10-13T00:04:07.506330688Z" level=info msg="Container eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:07.514427 containerd[1536]: time="2025-10-13T00:04:07.514383636Z" level=info msg="CreateContainer within sandbox \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\"" Oct 13 00:04:07.515268 containerd[1536]: time="2025-10-13T00:04:07.515191751Z" level=info msg="StartContainer for \"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\"" Oct 13 00:04:07.516777 containerd[1536]: time="2025-10-13T00:04:07.516738371Z" level=info msg="connecting to shim eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00" address="unix:///run/containerd/s/d48c8a74d8e52d9357ca93e5ea10fcc3a02b8ff41aab6943549ca27ac6dd37d9" protocol=ttrpc version=3 Oct 13 00:04:07.541434 systemd[1]: Started cri-containerd-eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00.scope - libcontainer container eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00. Oct 13 00:04:07.605430 containerd[1536]: time="2025-10-13T00:04:07.605391724Z" level=info msg="StartContainer for \"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\" returns successfully" Oct 13 00:04:07.925323 kubelet[2669]: E1013 00:04:07.925278 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdk78" podUID="95c16496-d748-4240-90a8-51fe76dca721" Oct 13 00:04:08.144448 systemd[1]: cri-containerd-eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00.scope: Deactivated successfully. Oct 13 00:04:08.145278 systemd[1]: cri-containerd-eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00.scope: Consumed 489ms CPU time, 180M memory peak, 2.7M read from disk, 165.8M written to disk. Oct 13 00:04:08.146819 containerd[1536]: time="2025-10-13T00:04:08.146777030Z" level=info msg="received exit event container_id:\"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\" id:\"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\" pid:3432 exited_at:{seconds:1760313848 nanos:146512634}" Oct 13 00:04:08.147071 containerd[1536]: time="2025-10-13T00:04:08.147049508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\" id:\"eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00\" pid:3432 exited_at:{seconds:1760313848 nanos:146512634}" Oct 13 00:04:08.167779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eab8e93b5b4c3b6ba496fc984c4021e14de18ff69dfe5159fbfbe7c60bc29f00-rootfs.mount: Deactivated successfully. Oct 13 00:04:08.172263 kubelet[2669]: I1013 00:04:08.170890 2669 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 00:04:08.348021 systemd[1]: Created slice kubepods-burstable-pod3d63c12c_b533_48c9_a0bb_e9884317fc41.slice - libcontainer container kubepods-burstable-pod3d63c12c_b533_48c9_a0bb_e9884317fc41.slice. 
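
Annotation: for the calico-typha pod_startup_latency_tracker entry logged earlier (podStartSLOduration=1.635760664, podStartE2EDuration=3.048537357s), the SLO figure is the end-to-end startup time with the image-pull window subtracted. The back-of-the-envelope check below reproduces it from the m=+… offsets copied out of that record; it is an illustration, not the tracker's actual code.

    // Recompute podStartSLOduration for calico-typha-5f85cc76ff-zsqdw from the
    // offsets in the kubelet record above (values copied from the log).
    package main

    import "fmt"

    func main() {
        const (
            podStartE2E         = 3.048537357  // observedRunningTime - podCreationTimestamp, seconds
            firstStartedPulling = 18.334248540 // m=+ offset, seconds
            lastFinishedPulling = 19.747025233 // m=+ offset, seconds
        )
        imagePull := lastFinishedPulling - firstStartedPulling
        slo := podStartE2E - imagePull
        // Prints ~1.412776693s of image pulling and ~1.635760664s SLO duration,
        // matching the logged podStartSLOduration up to rounding.
        fmt.Printf("image pull: %.9fs, SLO duration: %.9fs\n", imagePull, slo)
    }
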
Oct 13 00:04:08.359595 systemd[1]: Created slice kubepods-besteffort-pod1a4417d9_6c74_4135_852a_04179805653d.slice - libcontainer container kubepods-besteffort-pod1a4417d9_6c74_4135_852a_04179805653d.slice. Oct 13 00:04:08.368774 systemd[1]: Created slice kubepods-burstable-podb0ab16ba_64b0_4e22_a345_afb015a9a0a4.slice - libcontainer container kubepods-burstable-podb0ab16ba_64b0_4e22_a345_afb015a9a0a4.slice. Oct 13 00:04:08.378285 kubelet[2669]: I1013 00:04:08.378201 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxhb\" (UniqueName: \"kubernetes.io/projected/9dd1b1dc-234c-4e88-ab75-26e6f1e943b5-kube-api-access-cgxhb\") pod \"calico-apiserver-964f9dc6f-58jz4\" (UID: \"9dd1b1dc-234c-4e88-ab75-26e6f1e943b5\") " pod="calico-apiserver/calico-apiserver-964f9dc6f-58jz4" Oct 13 00:04:08.378285 kubelet[2669]: I1013 00:04:08.378254 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43e33533-46a8-4b8a-bb17-c138b6d6eb1d-goldmane-key-pair\") pod \"goldmane-854f97d977-g7xgn\" (UID: \"43e33533-46a8-4b8a-bb17-c138b6d6eb1d\") " pod="calico-system/goldmane-854f97d977-g7xgn" Oct 13 00:04:08.378464 kubelet[2669]: I1013 00:04:08.378325 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6g2\" (UniqueName: \"kubernetes.io/projected/43e33533-46a8-4b8a-bb17-c138b6d6eb1d-kube-api-access-pf6g2\") pod \"goldmane-854f97d977-g7xgn\" (UID: \"43e33533-46a8-4b8a-bb17-c138b6d6eb1d\") " pod="calico-system/goldmane-854f97d977-g7xgn" Oct 13 00:04:08.378464 kubelet[2669]: I1013 00:04:08.378359 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-backend-key-pair\") pod \"whisker-6d55b4ff87-r52l8\" (UID: \"e0280023-afc0-4e97-b63a-2f04efc35c38\") " pod="calico-system/whisker-6d55b4ff87-r52l8" Oct 13 00:04:08.378464 kubelet[2669]: I1013 00:04:08.378380 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43e33533-46a8-4b8a-bb17-c138b6d6eb1d-goldmane-ca-bundle\") pod \"goldmane-854f97d977-g7xgn\" (UID: \"43e33533-46a8-4b8a-bb17-c138b6d6eb1d\") " pod="calico-system/goldmane-854f97d977-g7xgn" Oct 13 00:04:08.378464 kubelet[2669]: I1013 00:04:08.378396 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9c4\" (UniqueName: \"kubernetes.io/projected/39731893-90ec-438e-8f26-b921f2ff0284-kube-api-access-km9c4\") pod \"calico-kube-controllers-c99b8446c-xnkwj\" (UID: \"39731893-90ec-438e-8f26-b921f2ff0284\") " pod="calico-system/calico-kube-controllers-c99b8446c-xnkwj" Oct 13 00:04:08.378464 kubelet[2669]: I1013 00:04:08.378422 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2dj\" (UniqueName: \"kubernetes.io/projected/b0ab16ba-64b0-4e22-a345-afb015a9a0a4-kube-api-access-2j2dj\") pod \"coredns-66bc5c9577-8fb7b\" (UID: \"b0ab16ba-64b0-4e22-a345-afb015a9a0a4\") " pod="kube-system/coredns-66bc5c9577-8fb7b" Oct 13 00:04:08.378579 kubelet[2669]: I1013 00:04:08.378438 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3d63c12c-b533-48c9-a0bb-e9884317fc41-config-volume\") pod \"coredns-66bc5c9577-w65sl\" (UID: \"3d63c12c-b533-48c9-a0bb-e9884317fc41\") " pod="kube-system/coredns-66bc5c9577-w65sl" Oct 13 00:04:08.378579 kubelet[2669]: I1013 00:04:08.378455 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrdgq\" (UniqueName: \"kubernetes.io/projected/3d63c12c-b533-48c9-a0bb-e9884317fc41-kube-api-access-vrdgq\") pod \"coredns-66bc5c9577-w65sl\" (UID: \"3d63c12c-b533-48c9-a0bb-e9884317fc41\") " pod="kube-system/coredns-66bc5c9577-w65sl" Oct 13 00:04:08.378579 kubelet[2669]: I1013 00:04:08.378476 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxst\" (UniqueName: \"kubernetes.io/projected/e0280023-afc0-4e97-b63a-2f04efc35c38-kube-api-access-6zxst\") pod \"whisker-6d55b4ff87-r52l8\" (UID: \"e0280023-afc0-4e97-b63a-2f04efc35c38\") " pod="calico-system/whisker-6d55b4ff87-r52l8" Oct 13 00:04:08.378579 kubelet[2669]: I1013 00:04:08.378496 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9dd1b1dc-234c-4e88-ab75-26e6f1e943b5-calico-apiserver-certs\") pod \"calico-apiserver-964f9dc6f-58jz4\" (UID: \"9dd1b1dc-234c-4e88-ab75-26e6f1e943b5\") " pod="calico-apiserver/calico-apiserver-964f9dc6f-58jz4" Oct 13 00:04:08.378579 kubelet[2669]: I1013 00:04:08.378517 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0ab16ba-64b0-4e22-a345-afb015a9a0a4-config-volume\") pod \"coredns-66bc5c9577-8fb7b\" (UID: \"b0ab16ba-64b0-4e22-a345-afb015a9a0a4\") " pod="kube-system/coredns-66bc5c9577-8fb7b" Oct 13 00:04:08.378680 kubelet[2669]: I1013 00:04:08.378534 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5lj\" (UniqueName: \"kubernetes.io/projected/1a4417d9-6c74-4135-852a-04179805653d-kube-api-access-2r5lj\") pod \"calico-apiserver-964f9dc6f-wkqzm\" (UID: \"1a4417d9-6c74-4135-852a-04179805653d\") " pod="calico-apiserver/calico-apiserver-964f9dc6f-wkqzm" Oct 13 00:04:08.378680 kubelet[2669]: I1013 00:04:08.378553 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e33533-46a8-4b8a-bb17-c138b6d6eb1d-config\") pod \"goldmane-854f97d977-g7xgn\" (UID: \"43e33533-46a8-4b8a-bb17-c138b6d6eb1d\") " pod="calico-system/goldmane-854f97d977-g7xgn" Oct 13 00:04:08.378680 kubelet[2669]: I1013 00:04:08.378568 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39731893-90ec-438e-8f26-b921f2ff0284-tigera-ca-bundle\") pod \"calico-kube-controllers-c99b8446c-xnkwj\" (UID: \"39731893-90ec-438e-8f26-b921f2ff0284\") " pod="calico-system/calico-kube-controllers-c99b8446c-xnkwj" Oct 13 00:04:08.378680 kubelet[2669]: I1013 00:04:08.378585 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a4417d9-6c74-4135-852a-04179805653d-calico-apiserver-certs\") pod \"calico-apiserver-964f9dc6f-wkqzm\" (UID: \"1a4417d9-6c74-4135-852a-04179805653d\") " pod="calico-apiserver/calico-apiserver-964f9dc6f-wkqzm" 
Oct 13 00:04:08.378680 kubelet[2669]: I1013 00:04:08.378608 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-ca-bundle\") pod \"whisker-6d55b4ff87-r52l8\" (UID: \"e0280023-afc0-4e97-b63a-2f04efc35c38\") " pod="calico-system/whisker-6d55b4ff87-r52l8" Oct 13 00:04:08.386737 systemd[1]: Created slice kubepods-besteffort-pod9dd1b1dc_234c_4e88_ab75_26e6f1e943b5.slice - libcontainer container kubepods-besteffort-pod9dd1b1dc_234c_4e88_ab75_26e6f1e943b5.slice. Oct 13 00:04:08.395719 systemd[1]: Created slice kubepods-besteffort-pod43e33533_46a8_4b8a_bb17_c138b6d6eb1d.slice - libcontainer container kubepods-besteffort-pod43e33533_46a8_4b8a_bb17_c138b6d6eb1d.slice. Oct 13 00:04:08.401428 systemd[1]: Created slice kubepods-besteffort-pode0280023_afc0_4e97_b63a_2f04efc35c38.slice - libcontainer container kubepods-besteffort-pode0280023_afc0_4e97_b63a_2f04efc35c38.slice. Oct 13 00:04:08.407151 systemd[1]: Created slice kubepods-besteffort-pod39731893_90ec_438e_8f26_b921f2ff0284.slice - libcontainer container kubepods-besteffort-pod39731893_90ec_438e_8f26_b921f2ff0284.slice. Oct 13 00:04:08.658166 containerd[1536]: time="2025-10-13T00:04:08.658066131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w65sl,Uid:3d63c12c-b533-48c9-a0bb-e9884317fc41,Namespace:kube-system,Attempt:0,}" Oct 13 00:04:08.668262 containerd[1536]: time="2025-10-13T00:04:08.668113260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-wkqzm,Uid:1a4417d9-6c74-4135-852a-04179805653d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:04:08.674091 containerd[1536]: time="2025-10-13T00:04:08.674057910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fb7b,Uid:b0ab16ba-64b0-4e22-a345-afb015a9a0a4,Namespace:kube-system,Attempt:0,}" Oct 13 00:04:08.692207 containerd[1536]: time="2025-10-13T00:04:08.692167578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-58jz4,Uid:9dd1b1dc-234c-4e88-ab75-26e6f1e943b5,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:04:08.702795 containerd[1536]: time="2025-10-13T00:04:08.702757180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-g7xgn,Uid:43e33533-46a8-4b8a-bb17-c138b6d6eb1d,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:08.709065 containerd[1536]: time="2025-10-13T00:04:08.708874254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d55b4ff87-r52l8,Uid:e0280023-afc0-4e97-b63a-2f04efc35c38,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:08.717469 containerd[1536]: time="2025-10-13T00:04:08.716459527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c99b8446c-xnkwj,Uid:39731893-90ec-438e-8f26-b921f2ff0284,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:08.788558 containerd[1536]: time="2025-10-13T00:04:08.788495542Z" level=error msg="Failed to destroy network for sandbox \"d36c2a27e1f72e1cf6d3a7e50fccaf714bcd9ce30243e7069de2dee405bedb3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.794346 containerd[1536]: time="2025-10-13T00:04:08.794283771Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-58jz4,Uid:9dd1b1dc-234c-4e88-ab75-26e6f1e943b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36c2a27e1f72e1cf6d3a7e50fccaf714bcd9ce30243e7069de2dee405bedb3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.794873 kubelet[2669]: E1013 00:04:08.794819 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36c2a27e1f72e1cf6d3a7e50fccaf714bcd9ce30243e7069de2dee405bedb3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.795011 kubelet[2669]: E1013 00:04:08.794919 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36c2a27e1f72e1cf6d3a7e50fccaf714bcd9ce30243e7069de2dee405bedb3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-964f9dc6f-58jz4" Oct 13 00:04:08.795011 kubelet[2669]: E1013 00:04:08.794988 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d36c2a27e1f72e1cf6d3a7e50fccaf714bcd9ce30243e7069de2dee405bedb3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-964f9dc6f-58jz4" Oct 13 00:04:08.795172 kubelet[2669]: E1013 00:04:08.795074 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-964f9dc6f-58jz4_calico-apiserver(9dd1b1dc-234c-4e88-ab75-26e6f1e943b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-964f9dc6f-58jz4_calico-apiserver(9dd1b1dc-234c-4e88-ab75-26e6f1e943b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d36c2a27e1f72e1cf6d3a7e50fccaf714bcd9ce30243e7069de2dee405bedb3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-964f9dc6f-58jz4" podUID="9dd1b1dc-234c-4e88-ab75-26e6f1e943b5" Oct 13 00:04:08.799960 containerd[1536]: time="2025-10-13T00:04:08.799884014Z" level=error msg="Failed to destroy network for sandbox \"037f3fb54acefe606e2e3146c9bd58879d139c53426679ef78b01bf01fef78df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.801990 containerd[1536]: time="2025-10-13T00:04:08.801907849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w65sl,Uid:3d63c12c-b533-48c9-a0bb-e9884317fc41,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"037f3fb54acefe606e2e3146c9bd58879d139c53426679ef78b01bf01fef78df\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.802269 kubelet[2669]: E1013 00:04:08.802207 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"037f3fb54acefe606e2e3146c9bd58879d139c53426679ef78b01bf01fef78df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.802332 kubelet[2669]: E1013 00:04:08.802289 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"037f3fb54acefe606e2e3146c9bd58879d139c53426679ef78b01bf01fef78df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w65sl" Oct 13 00:04:08.802332 kubelet[2669]: E1013 00:04:08.802310 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"037f3fb54acefe606e2e3146c9bd58879d139c53426679ef78b01bf01fef78df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w65sl" Oct 13 00:04:08.811716 kubelet[2669]: E1013 00:04:08.811644 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w65sl_kube-system(3d63c12c-b533-48c9-a0bb-e9884317fc41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w65sl_kube-system(3d63c12c-b533-48c9-a0bb-e9884317fc41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"037f3fb54acefe606e2e3146c9bd58879d139c53426679ef78b01bf01fef78df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w65sl" podUID="3d63c12c-b533-48c9-a0bb-e9884317fc41" Oct 13 00:04:08.812066 containerd[1536]: time="2025-10-13T00:04:08.812027188Z" level=error msg="Failed to destroy network for sandbox \"d9b1c19f5e573f48fc627fa707b4dbf47a20bb915dbcee96abdf3f57d4ae92c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.812311 containerd[1536]: time="2025-10-13T00:04:08.812209813Z" level=error msg="Failed to destroy network for sandbox \"2f14a17baa7b46470565850b56e49b27ddee667b1d5bb9cd78341b07e85eabb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.813767 containerd[1536]: time="2025-10-13T00:04:08.813724459Z" level=error msg="Failed to destroy network for sandbox \"b7a2572b98bd58e8ad48455f3da7717536a3679f9b7e713de4b5eb211ae8d009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 13 00:04:08.814344 containerd[1536]: time="2025-10-13T00:04:08.814299698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d55b4ff87-r52l8,Uid:e0280023-afc0-4e97-b63a-2f04efc35c38,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b1c19f5e573f48fc627fa707b4dbf47a20bb915dbcee96abdf3f57d4ae92c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.814939 containerd[1536]: time="2025-10-13T00:04:08.814653026Z" level=error msg="Failed to destroy network for sandbox \"b062c1a9fcf76b5e32991cb9a3ba138019b4d04f10fc15d1c7c55b2efeb8c7c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.815021 kubelet[2669]: E1013 00:04:08.814675 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b1c19f5e573f48fc627fa707b4dbf47a20bb915dbcee96abdf3f57d4ae92c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.815021 kubelet[2669]: E1013 00:04:08.814743 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b1c19f5e573f48fc627fa707b4dbf47a20bb915dbcee96abdf3f57d4ae92c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d55b4ff87-r52l8" Oct 13 00:04:08.815021 kubelet[2669]: E1013 00:04:08.814764 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b1c19f5e573f48fc627fa707b4dbf47a20bb915dbcee96abdf3f57d4ae92c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d55b4ff87-r52l8" Oct 13 00:04:08.815264 containerd[1536]: time="2025-10-13T00:04:08.815184578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-g7xgn,Uid:43e33533-46a8-4b8a-bb17-c138b6d6eb1d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f14a17baa7b46470565850b56e49b27ddee667b1d5bb9cd78341b07e85eabb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.815809 kubelet[2669]: E1013 00:04:08.814814 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d55b4ff87-r52l8_calico-system(e0280023-afc0-4e97-b63a-2f04efc35c38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d55b4ff87-r52l8_calico-system(e0280023-afc0-4e97-b63a-2f04efc35c38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9b1c19f5e573f48fc627fa707b4dbf47a20bb915dbcee96abdf3f57d4ae92c0\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d55b4ff87-r52l8" podUID="e0280023-afc0-4e97-b63a-2f04efc35c38" Oct 13 00:04:08.816386 containerd[1536]: time="2025-10-13T00:04:08.816340216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fb7b,Uid:b0ab16ba-64b0-4e22-a345-afb015a9a0a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a2572b98bd58e8ad48455f3da7717536a3679f9b7e713de4b5eb211ae8d009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.817211 kubelet[2669]: E1013 00:04:08.817174 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f14a17baa7b46470565850b56e49b27ddee667b1d5bb9cd78341b07e85eabb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.817407 kubelet[2669]: E1013 00:04:08.817169 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a2572b98bd58e8ad48455f3da7717536a3679f9b7e713de4b5eb211ae8d009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.817448 kubelet[2669]: E1013 00:04:08.817421 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a2572b98bd58e8ad48455f3da7717536a3679f9b7e713de4b5eb211ae8d009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8fb7b" Oct 13 00:04:08.817473 kubelet[2669]: E1013 00:04:08.817447 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a2572b98bd58e8ad48455f3da7717536a3679f9b7e713de4b5eb211ae8d009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8fb7b" Oct 13 00:04:08.817533 kubelet[2669]: E1013 00:04:08.817502 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8fb7b_kube-system(b0ab16ba-64b0-4e22-a345-afb015a9a0a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8fb7b_kube-system(b0ab16ba-64b0-4e22-a345-afb015a9a0a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7a2572b98bd58e8ad48455f3da7717536a3679f9b7e713de4b5eb211ae8d009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8fb7b" podUID="b0ab16ba-64b0-4e22-a345-afb015a9a0a4" Oct 13 00:04:08.817533 kubelet[2669]: E1013 00:04:08.817391 2669 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f14a17baa7b46470565850b56e49b27ddee667b1d5bb9cd78341b07e85eabb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-g7xgn" Oct 13 00:04:08.817621 kubelet[2669]: E1013 00:04:08.817538 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f14a17baa7b46470565850b56e49b27ddee667b1d5bb9cd78341b07e85eabb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-g7xgn" Oct 13 00:04:08.817621 kubelet[2669]: E1013 00:04:08.817563 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-g7xgn_calico-system(43e33533-46a8-4b8a-bb17-c138b6d6eb1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-g7xgn_calico-system(43e33533-46a8-4b8a-bb17-c138b6d6eb1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f14a17baa7b46470565850b56e49b27ddee667b1d5bb9cd78341b07e85eabb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-g7xgn" podUID="43e33533-46a8-4b8a-bb17-c138b6d6eb1d" Oct 13 00:04:08.817686 containerd[1536]: time="2025-10-13T00:04:08.817518416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-wkqzm,Uid:1a4417d9-6c74-4135-852a-04179805653d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b062c1a9fcf76b5e32991cb9a3ba138019b4d04f10fc15d1c7c55b2efeb8c7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.817761 kubelet[2669]: E1013 00:04:08.817732 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b062c1a9fcf76b5e32991cb9a3ba138019b4d04f10fc15d1c7c55b2efeb8c7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.817801 kubelet[2669]: E1013 00:04:08.817766 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b062c1a9fcf76b5e32991cb9a3ba138019b4d04f10fc15d1c7c55b2efeb8c7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-964f9dc6f-wkqzm" Oct 13 00:04:08.817801 kubelet[2669]: E1013 00:04:08.817783 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b062c1a9fcf76b5e32991cb9a3ba138019b4d04f10fc15d1c7c55b2efeb8c7c1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-964f9dc6f-wkqzm" Oct 13 00:04:08.817844 kubelet[2669]: E1013 00:04:08.817810 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-964f9dc6f-wkqzm_calico-apiserver(1a4417d9-6c74-4135-852a-04179805653d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-964f9dc6f-wkqzm_calico-apiserver(1a4417d9-6c74-4135-852a-04179805653d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b062c1a9fcf76b5e32991cb9a3ba138019b4d04f10fc15d1c7c55b2efeb8c7c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-964f9dc6f-wkqzm" podUID="1a4417d9-6c74-4135-852a-04179805653d" Oct 13 00:04:08.820626 containerd[1536]: time="2025-10-13T00:04:08.820567672Z" level=error msg="Failed to destroy network for sandbox \"52cf45a6f41a7b94c210057d2cf99e7eb35c9b8525f77930536e49e37a14ef52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.821634 containerd[1536]: time="2025-10-13T00:04:08.821590811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c99b8446c-xnkwj,Uid:39731893-90ec-438e-8f26-b921f2ff0284,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cf45a6f41a7b94c210057d2cf99e7eb35c9b8525f77930536e49e37a14ef52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.821871 kubelet[2669]: E1013 00:04:08.821830 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cf45a6f41a7b94c210057d2cf99e7eb35c9b8525f77930536e49e37a14ef52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:08.821948 kubelet[2669]: E1013 00:04:08.821892 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cf45a6f41a7b94c210057d2cf99e7eb35c9b8525f77930536e49e37a14ef52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c99b8446c-xnkwj" Oct 13 00:04:08.821948 kubelet[2669]: E1013 00:04:08.821921 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cf45a6f41a7b94c210057d2cf99e7eb35c9b8525f77930536e49e37a14ef52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c99b8446c-xnkwj" Oct 13 00:04:08.822165 kubelet[2669]: E1013 
00:04:08.822042 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c99b8446c-xnkwj_calico-system(39731893-90ec-438e-8f26-b921f2ff0284)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c99b8446c-xnkwj_calico-system(39731893-90ec-438e-8f26-b921f2ff0284)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52cf45a6f41a7b94c210057d2cf99e7eb35c9b8525f77930536e49e37a14ef52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c99b8446c-xnkwj" podUID="39731893-90ec-438e-8f26-b921f2ff0284" Oct 13 00:04:09.058002 containerd[1536]: time="2025-10-13T00:04:09.057961563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 00:04:09.508447 systemd[1]: run-netns-cni\x2d38b5d168\x2df781\x2de8ba\x2d7dbb\x2d950790b6f310.mount: Deactivated successfully. Oct 13 00:04:09.508536 systemd[1]: run-netns-cni\x2d69bc52f1\x2dd5c4\x2d295e\x2d6dd6\x2dffb8fe6e845c.mount: Deactivated successfully. Oct 13 00:04:09.508593 systemd[1]: run-netns-cni\x2d4e129217\x2dcafc\x2df9c9\x2d248b\x2db078da04c803.mount: Deactivated successfully. Oct 13 00:04:09.936338 systemd[1]: Created slice kubepods-besteffort-pod95c16496_d748_4240_90a8_51fe76dca721.slice - libcontainer container kubepods-besteffort-pod95c16496_d748_4240_90a8_51fe76dca721.slice. Oct 13 00:04:09.991896 containerd[1536]: time="2025-10-13T00:04:09.991844854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdk78,Uid:95c16496-d748-4240-90a8-51fe76dca721,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:10.063476 containerd[1536]: time="2025-10-13T00:04:10.063415645Z" level=error msg="Failed to destroy network for sandbox \"c12e9764c257f6317061b921f254fbb9b55daaac075de21faa0495da22b654a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:10.065338 systemd[1]: run-netns-cni\x2df318e3a5\x2df6b9\x2d453a\x2d8103\x2dfa117baace14.mount: Deactivated successfully. 
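Every sandbox failure in the burst above (goldmane, calico-apiserver, calico-kube-controllers) reports the same underlying condition: the CNI plugin cannot stat /var/lib/calico/nodename because the calico/node container is not running yet, and the kubelet keeps retrying while the node image is still being pulled. Below is a minimal Go sketch of the same existence check, handy for reproducing the condition by hand on a node; the path is taken from the error text, everything else is illustrative.

// nodenamecheck.go - a minimal sketch of the check behind the
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
// The path comes straight from the log text; the rest is illustrative.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	switch {
	case err == nil:
		// calico-node has written its node name; CNI ADD calls can proceed.
		fmt.Printf("calico nodename present: %q\n", string(data))
	case os.IsNotExist(err):
		// This is the state the kubelet errors above describe: the file is
		// created by the calico/node container once it is running and has
		// /var/lib/calico mounted.
		fmt.Println("nodename file missing: calico/node has not started yet")
	default:
		fmt.Printf("unexpected error: %v\n", err)
	}
}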
Oct 13 00:04:10.108274 containerd[1536]: time="2025-10-13T00:04:10.107382296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdk78,Uid:95c16496-d748-4240-90a8-51fe76dca721,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12e9764c257f6317061b921f254fbb9b55daaac075de21faa0495da22b654a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:10.108438 kubelet[2669]: E1013 00:04:10.107725 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12e9764c257f6317061b921f254fbb9b55daaac075de21faa0495da22b654a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:04:10.108438 kubelet[2669]: E1013 00:04:10.107785 2669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12e9764c257f6317061b921f254fbb9b55daaac075de21faa0495da22b654a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:10.108438 kubelet[2669]: E1013 00:04:10.107807 2669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c12e9764c257f6317061b921f254fbb9b55daaac075de21faa0495da22b654a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pdk78" Oct 13 00:04:10.108773 kubelet[2669]: E1013 00:04:10.107862 2669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pdk78_calico-system(95c16496-d748-4240-90a8-51fe76dca721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pdk78_calico-system(95c16496-d748-4240-90a8-51fe76dca721)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c12e9764c257f6317061b921f254fbb9b55daaac075de21faa0495da22b654a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pdk78" podUID="95c16496-d748-4240-90a8-51fe76dca721" Oct 13 00:04:11.809953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154558734.mount: Deactivated successfully. 
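The mount units being torn down in these entries (run-netns-cni\x2d..., var-lib-containerd-tmpmounts-containerd\x2dmount...) use systemd's unit-name escaping, in which "/" in a path becomes "-" and a literal "-" is encoded as \x2d. The small Go sketch below undoes the hex escapes so the unit names read as plain names again; it only covers the \xNN form visible in this log, not the full systemd-escape(1) rules.

// unescape.go - decode the \xNN escapes systemd uses in unit names, as seen
// in the run-netns-cni\x2d... mount units above. Handles only the hex-escape
// form visible in this log, not every rule of systemd-escape(1).
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(name string) string {
	var b strings.Builder
	for i := 0; i < len(name); {
		// A systemd hex escape has the form \x2d: backslash, 'x', two hex digits.
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(name[i])
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d38b5d168\x2df781\x2de8ba\x2d7dbb\x2d950790b6f310.mount`
	fmt.Println(unescapeUnit(unit))
	// run-netns-cni-38b5d168-f781-e8ba-7dbb-950790b6f310.mount
}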
Oct 13 00:04:12.012060 containerd[1536]: time="2025-10-13T00:04:12.012003336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 00:04:12.017479 containerd[1536]: time="2025-10-13T00:04:12.017339390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:12.027355 containerd[1536]: time="2025-10-13T00:04:12.027297334Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:12.028007 containerd[1536]: time="2025-10-13T00:04:12.027972212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:12.035514 containerd[1536]: time="2025-10-13T00:04:12.035461792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 2.977453504s" Oct 13 00:04:12.035514 containerd[1536]: time="2025-10-13T00:04:12.035507478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 00:04:12.054659 containerd[1536]: time="2025-10-13T00:04:12.054600792Z" level=info msg="CreateContainer within sandbox \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 00:04:12.074345 containerd[1536]: time="2025-10-13T00:04:12.073689386Z" level=info msg="Container 7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:12.084350 containerd[1536]: time="2025-10-13T00:04:12.084259881Z" level=info msg="CreateContainer within sandbox \"736e5ae23dae9514edcb337e006c0b119141ee8650bc9cd5b677c4d59ae6cf4d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54\"" Oct 13 00:04:12.084941 containerd[1536]: time="2025-10-13T00:04:12.084905155Z" level=info msg="StartContainer for \"7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54\"" Oct 13 00:04:12.086570 containerd[1536]: time="2025-10-13T00:04:12.086536623Z" level=info msg="connecting to shim 7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54" address="unix:///run/containerd/s/d48c8a74d8e52d9357ca93e5ea10fcc3a02b8ff41aab6943549ca27ac6dd37d9" protocol=ttrpc version=3 Oct 13 00:04:12.129461 systemd[1]: Started cri-containerd-7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54.scope - libcontainer container 7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54. Oct 13 00:04:12.186369 containerd[1536]: time="2025-10-13T00:04:12.186296809Z" level=info msg="StartContainer for \"7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54\" returns successfully" Oct 13 00:04:12.303081 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 00:04:12.303190 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 13 00:04:12.508189 kubelet[2669]: I1013 00:04:12.508141 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-ca-bundle\") pod \"e0280023-afc0-4e97-b63a-2f04efc35c38\" (UID: \"e0280023-afc0-4e97-b63a-2f04efc35c38\") " Oct 13 00:04:12.508613 kubelet[2669]: I1013 00:04:12.508206 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-backend-key-pair\") pod \"e0280023-afc0-4e97-b63a-2f04efc35c38\" (UID: \"e0280023-afc0-4e97-b63a-2f04efc35c38\") " Oct 13 00:04:12.508613 kubelet[2669]: I1013 00:04:12.508262 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zxst\" (UniqueName: \"kubernetes.io/projected/e0280023-afc0-4e97-b63a-2f04efc35c38-kube-api-access-6zxst\") pod \"e0280023-afc0-4e97-b63a-2f04efc35c38\" (UID: \"e0280023-afc0-4e97-b63a-2f04efc35c38\") " Oct 13 00:04:12.527270 kubelet[2669]: I1013 00:04:12.526842 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0280023-afc0-4e97-b63a-2f04efc35c38-kube-api-access-6zxst" (OuterVolumeSpecName: "kube-api-access-6zxst") pod "e0280023-afc0-4e97-b63a-2f04efc35c38" (UID: "e0280023-afc0-4e97-b63a-2f04efc35c38"). InnerVolumeSpecName "kube-api-access-6zxst". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:04:12.527270 kubelet[2669]: I1013 00:04:12.526869 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e0280023-afc0-4e97-b63a-2f04efc35c38" (UID: "e0280023-afc0-4e97-b63a-2f04efc35c38"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:04:12.528534 kubelet[2669]: I1013 00:04:12.528456 2669 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e0280023-afc0-4e97-b63a-2f04efc35c38" (UID: "e0280023-afc0-4e97-b63a-2f04efc35c38"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:04:12.609489 kubelet[2669]: I1013 00:04:12.609438 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zxst\" (UniqueName: \"kubernetes.io/projected/e0280023-afc0-4e97-b63a-2f04efc35c38-kube-api-access-6zxst\") on node \"localhost\" DevicePath \"\"" Oct 13 00:04:12.609489 kubelet[2669]: I1013 00:04:12.609471 2669 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 00:04:12.609489 kubelet[2669]: I1013 00:04:12.609479 2669 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0280023-afc0-4e97-b63a-2f04efc35c38-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 00:04:12.811838 systemd[1]: var-lib-kubelet-pods-e0280023\x2dafc0\x2d4e97\x2db63a\x2d2f04efc35c38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6zxst.mount: Deactivated successfully. 
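For scale, the calico/node pull recorded a few entries above moved 151,100,457 bytes in 2.977453504s, which works out to roughly 48 MiB/s (about 50 MB/s). A throwaway Go sketch of that arithmetic, using the figures exactly as logged.

// pullrate.go - back-of-the-envelope throughput for the calico/node pull,
// using the byte count and duration exactly as they appear in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 151100457 // "active requests=0, bytes read=151100457"
	dur, err := time.ParseDuration("2.977453504s") // "... in 2.977453504s"
	if err != nil {
		panic(err)
	}
	mibPerSec := float64(bytesRead) / (1 << 20) / dur.Seconds()
	fmt.Printf("~%.1f MiB/s over %v\n", mibPerSec, dur)
}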
Oct 13 00:04:12.811942 systemd[1]: var-lib-kubelet-pods-e0280023\x2dafc0\x2d4e97\x2db63a\x2d2f04efc35c38-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 00:04:13.082264 systemd[1]: Removed slice kubepods-besteffort-pode0280023_afc0_4e97_b63a_2f04efc35c38.slice - libcontainer container kubepods-besteffort-pode0280023_afc0_4e97_b63a_2f04efc35c38.slice. Oct 13 00:04:13.111112 kubelet[2669]: I1013 00:04:13.110880 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c4755" podStartSLOduration=2.483428273 podStartE2EDuration="12.106096743s" podCreationTimestamp="2025-10-13 00:04:01 +0000 UTC" firstStartedPulling="2025-10-13 00:04:02.413802358 +0000 UTC m=+18.579072019" lastFinishedPulling="2025-10-13 00:04:12.036470828 +0000 UTC m=+28.201740489" observedRunningTime="2025-10-13 00:04:13.095515574 +0000 UTC m=+29.260785235" watchObservedRunningTime="2025-10-13 00:04:13.106096743 +0000 UTC m=+29.271366404" Oct 13 00:04:13.177469 systemd[1]: Created slice kubepods-besteffort-podeb3abfe5_3bc0_489a_9764_a782e428d7bd.slice - libcontainer container kubepods-besteffort-podeb3abfe5_3bc0_489a_9764_a782e428d7bd.slice. Oct 13 00:04:13.214013 kubelet[2669]: I1013 00:04:13.213919 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb3abfe5-3bc0-489a-9764-a782e428d7bd-whisker-ca-bundle\") pod \"whisker-57d4776ddb-h8h4b\" (UID: \"eb3abfe5-3bc0-489a-9764-a782e428d7bd\") " pod="calico-system/whisker-57d4776ddb-h8h4b" Oct 13 00:04:13.214013 kubelet[2669]: I1013 00:04:13.213969 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb3abfe5-3bc0-489a-9764-a782e428d7bd-whisker-backend-key-pair\") pod \"whisker-57d4776ddb-h8h4b\" (UID: \"eb3abfe5-3bc0-489a-9764-a782e428d7bd\") " pod="calico-system/whisker-57d4776ddb-h8h4b" Oct 13 00:04:13.214013 kubelet[2669]: I1013 00:04:13.213986 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtlb\" (UniqueName: \"kubernetes.io/projected/eb3abfe5-3bc0-489a-9764-a782e428d7bd-kube-api-access-6rtlb\") pod \"whisker-57d4776ddb-h8h4b\" (UID: \"eb3abfe5-3bc0-489a-9764-a782e428d7bd\") " pod="calico-system/whisker-57d4776ddb-h8h4b" Oct 13 00:04:13.487891 containerd[1536]: time="2025-10-13T00:04:13.487846093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d4776ddb-h8h4b,Uid:eb3abfe5-3bc0-489a-9764-a782e428d7bd,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:13.644843 systemd-networkd[1448]: cali9638f1d34f7: Link UP Oct 13 00:04:13.645426 systemd-networkd[1448]: cali9638f1d34f7: Gained carrier Oct 13 00:04:13.660436 containerd[1536]: 2025-10-13 00:04:13.508 [INFO][3807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:13.660436 containerd[1536]: 2025-10-13 00:04:13.544 [INFO][3807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57d4776ddb--h8h4b-eth0 whisker-57d4776ddb- calico-system eb3abfe5-3bc0-489a-9764-a782e428d7bd 854 0 2025-10-13 00:04:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57d4776ddb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} 
{k8s localhost whisker-57d4776ddb-h8h4b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9638f1d34f7 [] [] }} ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-" Oct 13 00:04:13.660436 containerd[1536]: 2025-10-13 00:04:13.544 [INFO][3807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.660436 containerd[1536]: 2025-10-13 00:04:13.602 [INFO][3822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" HandleID="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Workload="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.602 [INFO][3822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" HandleID="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Workload="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57d4776ddb-h8h4b", "timestamp":"2025-10-13 00:04:13.602544357 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.602 [INFO][3822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.602 [INFO][3822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
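The startup-latency figures in the pod_startup_latency_tracker entry further up for calico-node-c4755 are internally consistent: the E2E duration (12.106096743s) is the gap between podCreationTimestamp and the observed running time, and the SLO duration (2.483428273s) is that same gap minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The Go sketch below reproduces the arithmetic from the logged timestamps; the relationship is inferred from the numbers themselves, not quoted from kubelet source.

// sloduration.go - reproduce the calico-node-c4755 startup figures from the
// timestamps in the pod_startup_latency_tracker entry above. The relationship
// (SLO duration = E2E duration minus image-pull time) is inferred from the
// logged numbers, not taken from kubelet code.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-13 00:04:01 +0000 UTC")
	firstPull := mustParse("2025-10-13 00:04:02.413802358 +0000 UTC")
	lastPull := mustParse("2025-10-13 00:04:12.036470828 +0000 UTC")
	observed := mustParse("2025-10-13 00:04:13.106096743 +0000 UTC")

	e2e := observed.Sub(created)         // logged as podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // logged as podStartSLOduration

	fmt.Println("E2E:", e2e) // 12.106096743s
	fmt.Println("SLO:", slo) // 2.483428273s
}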
Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.602 [INFO][3822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.613 [INFO][3822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" host="localhost" Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.618 [INFO][3822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.622 [INFO][3822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.623 [INFO][3822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.625 [INFO][3822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:13.660683 containerd[1536]: 2025-10-13 00:04:13.625 [INFO][3822] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" host="localhost" Oct 13 00:04:13.660876 containerd[1536]: 2025-10-13 00:04:13.627 [INFO][3822] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e Oct 13 00:04:13.660876 containerd[1536]: 2025-10-13 00:04:13.630 [INFO][3822] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" host="localhost" Oct 13 00:04:13.660876 containerd[1536]: 2025-10-13 00:04:13.636 [INFO][3822] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" host="localhost" Oct 13 00:04:13.660876 containerd[1536]: 2025-10-13 00:04:13.636 [INFO][3822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" host="localhost" Oct 13 00:04:13.660876 containerd[1536]: 2025-10-13 00:04:13.636 [INFO][3822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
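The IPAM exchange above follows a fixed sequence: take the host-wide lock, look up the host's block affinity, load block 192.168.88.128/26, claim the next free address (192.168.88.129 here, with .130 and .131 handed out later in this log), write the block back, and release the lock. A /26 block spans 64 addresses, so the node can serve 64 pod IPs from this block before another block is needed. A short net/netip sketch of that block arithmetic; the figures are the ones in the log, the code itself is only illustrative.

// blockmath.go - the address arithmetic behind the IPAM entries above:
// block 192.168.88.128/26, first assignment 192.168.88.129. Illustrative only.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.129")

	// A /26 spans 2^(32-26) = 64 addresses.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	fmt.Printf("%s inside block: %v\n", assigned, block.Contains(assigned))

	// Walk the first few addresses of the block; the log shows .129, .130
	// and .131 being handed out in this order.
	addr := block.Addr()
	for i := 0; i < 3; i++ {
		fmt.Println("candidate:", addr)
		addr = addr.Next()
	}
}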
Oct 13 00:04:13.660876 containerd[1536]: 2025-10-13 00:04:13.636 [INFO][3822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" HandleID="k8s-pod-network.83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Workload="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.660987 containerd[1536]: 2025-10-13 00:04:13.638 [INFO][3807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57d4776ddb--h8h4b-eth0", GenerateName:"whisker-57d4776ddb-", Namespace:"calico-system", SelfLink:"", UID:"eb3abfe5-3bc0-489a-9764-a782e428d7bd", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d4776ddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57d4776ddb-h8h4b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9638f1d34f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:13.660987 containerd[1536]: 2025-10-13 00:04:13.638 [INFO][3807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.661051 containerd[1536]: 2025-10-13 00:04:13.639 [INFO][3807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9638f1d34f7 ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.661051 containerd[1536]: 2025-10-13 00:04:13.646 [INFO][3807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.661090 containerd[1536]: 2025-10-13 00:04:13.646 [INFO][3807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57d4776ddb--h8h4b-eth0", GenerateName:"whisker-57d4776ddb-", Namespace:"calico-system", SelfLink:"", UID:"eb3abfe5-3bc0-489a-9764-a782e428d7bd", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d4776ddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e", Pod:"whisker-57d4776ddb-h8h4b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9638f1d34f7", MAC:"5e:e5:be:81:d4:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:13.661135 containerd[1536]: 2025-10-13 00:04:13.658 [INFO][3807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" Namespace="calico-system" Pod="whisker-57d4776ddb-h8h4b" WorkloadEndpoint="localhost-k8s-whisker--57d4776ddb--h8h4b-eth0" Oct 13 00:04:13.712353 containerd[1536]: time="2025-10-13T00:04:13.711750695Z" level=info msg="connecting to shim 83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e" address="unix:///run/containerd/s/0b42a8fee31f43e7acab6225c3676ad6b7eee33855be28ae9362653d778b5173" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:13.755659 systemd[1]: Started cri-containerd-83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e.scope - libcontainer container 83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e. 
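The endpoint written to the datastore above is keyed by a name of the form localhost-k8s-whisker--57d4776ddb--h8h4b-eth0, i.e. node, orchestrator, pod and interface joined with "-", with any "-" already inside a field doubled so the pieces stay separable. The sketch below rebuilds such names the same way; the escaping rule is read off the examples in this log rather than taken from Calico's code.

// wepname.go - rebuild WorkloadEndpoint names of the form seen above
// (localhost-k8s-whisker--57d4776ddb--h8h4b-eth0). The escaping rule
// (double any "-" inside a field) is read off the names in this log,
// not quoted from Calico source.
package main

import (
	"fmt"
	"strings"
)

func wepName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return fmt.Sprintf("%s-k8s-%s-%s", esc(node), esc(pod), esc(iface))
}

func main() {
	fmt.Println(wepName("localhost", "whisker-57d4776ddb-h8h4b", "eth0"))
	// localhost-k8s-whisker--57d4776ddb--h8h4b-eth0
	fmt.Println(wepName("localhost", "coredns-66bc5c9577-w65sl", "eth0"))
	// localhost-k8s-coredns--66bc5c9577--w65sl-eth0
}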
Oct 13 00:04:13.784185 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:13.862610 containerd[1536]: time="2025-10-13T00:04:13.862546384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d4776ddb-h8h4b,Uid:eb3abfe5-3bc0-489a-9764-a782e428d7bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e\"" Oct 13 00:04:13.865428 containerd[1536]: time="2025-10-13T00:04:13.865123269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 00:04:13.929788 kubelet[2669]: I1013 00:04:13.929744 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0280023-afc0-4e97-b63a-2f04efc35c38" path="/var/lib/kubelet/pods/e0280023-afc0-4e97-b63a-2f04efc35c38/volumes" Oct 13 00:04:14.076735 kubelet[2669]: I1013 00:04:14.076637 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:14.712353 containerd[1536]: time="2025-10-13T00:04:14.712280831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:14.713692 containerd[1536]: time="2025-10-13T00:04:14.713657538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Oct 13 00:04:14.714989 containerd[1536]: time="2025-10-13T00:04:14.714942354Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:14.719026 containerd[1536]: time="2025-10-13T00:04:14.718970062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:14.720429 containerd[1536]: time="2025-10-13T00:04:14.720391373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 855.227499ms" Oct 13 00:04:14.720478 containerd[1536]: time="2025-10-13T00:04:14.720464820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Oct 13 00:04:14.726147 containerd[1536]: time="2025-10-13T00:04:14.726101379Z" level=info msg="CreateContainer within sandbox \"83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 00:04:14.733992 containerd[1536]: time="2025-10-13T00:04:14.733938011Z" level=info msg="Container 2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:14.744307 containerd[1536]: time="2025-10-13T00:04:14.744265747Z" level=info msg="CreateContainer within sandbox \"83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa\"" Oct 13 00:04:14.745328 containerd[1536]: time="2025-10-13T00:04:14.745291136Z" level=info 
msg="StartContainer for \"2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa\"" Oct 13 00:04:14.747545 containerd[1536]: time="2025-10-13T00:04:14.747444485Z" level=info msg="connecting to shim 2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa" address="unix:///run/containerd/s/0b42a8fee31f43e7acab6225c3676ad6b7eee33855be28ae9362653d778b5173" protocol=ttrpc version=3 Oct 13 00:04:14.767469 systemd[1]: Started cri-containerd-2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa.scope - libcontainer container 2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa. Oct 13 00:04:14.804188 containerd[1536]: time="2025-10-13T00:04:14.804037373Z" level=info msg="StartContainer for \"2075a54ff616553abeaec529482d2cfba7e58c33e88ad6d9e42e73b3f7a73efa\" returns successfully" Oct 13 00:04:14.805417 containerd[1536]: time="2025-10-13T00:04:14.805377236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 00:04:15.000379 systemd-networkd[1448]: cali9638f1d34f7: Gained IPv6LL Oct 13 00:04:15.995772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355889672.mount: Deactivated successfully. Oct 13 00:04:16.034336 containerd[1536]: time="2025-10-13T00:04:16.034276235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:16.035319 containerd[1536]: time="2025-10-13T00:04:16.035163722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Oct 13 00:04:16.037120 containerd[1536]: time="2025-10-13T00:04:16.037082311Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:16.043529 containerd[1536]: time="2025-10-13T00:04:16.043146588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:16.043789 containerd[1536]: time="2025-10-13T00:04:16.043757168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.23834741s" Oct 13 00:04:16.043833 containerd[1536]: time="2025-10-13T00:04:16.043793972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Oct 13 00:04:16.053498 containerd[1536]: time="2025-10-13T00:04:16.053444442Z" level=info msg="CreateContainer within sandbox \"83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 00:04:16.060177 containerd[1536]: time="2025-10-13T00:04:16.060141062Z" level=info msg="Container 0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:16.070792 containerd[1536]: time="2025-10-13T00:04:16.070751187Z" level=info msg="CreateContainer within sandbox 
\"83308a198710940576ddcc50a8c36a0564790416e1bcc67aa85351d82e597b8e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd\"" Oct 13 00:04:16.071562 containerd[1536]: time="2025-10-13T00:04:16.071532384Z" level=info msg="StartContainer for \"0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd\"" Oct 13 00:04:16.072645 containerd[1536]: time="2025-10-13T00:04:16.072615730Z" level=info msg="connecting to shim 0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd" address="unix:///run/containerd/s/0b42a8fee31f43e7acab6225c3676ad6b7eee33855be28ae9362653d778b5173" protocol=ttrpc version=3 Oct 13 00:04:16.100688 systemd[1]: Started cri-containerd-0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd.scope - libcontainer container 0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd. Oct 13 00:04:16.140995 containerd[1536]: time="2025-10-13T00:04:16.140948659Z" level=info msg="StartContainer for \"0e8c7e096bc2afa2518ed0f0b24c8d61a717958964a48ef0bb49ca0b0b4411dd\" returns successfully" Oct 13 00:04:17.126003 kubelet[2669]: I1013 00:04:17.125900 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57d4776ddb-h8h4b" podStartSLOduration=1.945005245 podStartE2EDuration="4.125884729s" podCreationTimestamp="2025-10-13 00:04:13 +0000 UTC" firstStartedPulling="2025-10-13 00:04:13.864849399 +0000 UTC m=+30.030119020" lastFinishedPulling="2025-10-13 00:04:16.045728883 +0000 UTC m=+32.210998504" observedRunningTime="2025-10-13 00:04:17.125508213 +0000 UTC m=+33.290777874" watchObservedRunningTime="2025-10-13 00:04:17.125884729 +0000 UTC m=+33.291154390" Oct 13 00:04:19.930896 containerd[1536]: time="2025-10-13T00:04:19.930843568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-g7xgn,Uid:43e33533-46a8-4b8a-bb17-c138b6d6eb1d,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:19.932589 containerd[1536]: time="2025-10-13T00:04:19.932436709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fb7b,Uid:b0ab16ba-64b0-4e22-a345-afb015a9a0a4,Namespace:kube-system,Attempt:0,}" Oct 13 00:04:19.933923 containerd[1536]: time="2025-10-13T00:04:19.933880037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w65sl,Uid:3d63c12c-b533-48c9-a0bb-e9884317fc41,Namespace:kube-system,Attempt:0,}" Oct 13 00:04:20.059255 systemd-networkd[1448]: cali5cc804b7a16: Link UP Oct 13 00:04:20.059689 systemd-networkd[1448]: cali5cc804b7a16: Gained carrier Oct 13 00:04:20.072433 containerd[1536]: 2025-10-13 00:04:19.974 [INFO][4210] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:20.072433 containerd[1536]: 2025-10-13 00:04:19.992 [INFO][4210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--w65sl-eth0 coredns-66bc5c9577- kube-system 3d63c12c-b533-48c9-a0bb-e9884317fc41 788 0 2025-10-13 00:03:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-w65sl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cc804b7a16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-" Oct 13 00:04:20.072433 containerd[1536]: 2025-10-13 00:04:19.992 [INFO][4210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.072433 containerd[1536]: 2025-10-13 00:04:20.019 [INFO][4242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" HandleID="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Workload="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.019 [INFO][4242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" HandleID="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Workload="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005aaa60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-w65sl", "timestamp":"2025-10-13 00:04:20.019127262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.019 [INFO][4242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.019 [INFO][4242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.019 [INFO][4242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.030 [INFO][4242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" host="localhost" Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.036 [INFO][4242] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.040 [INFO][4242] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.041 [INFO][4242] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.043 [INFO][4242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:20.072627 containerd[1536]: 2025-10-13 00:04:20.043 [INFO][4242] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" host="localhost" Oct 13 00:04:20.072840 containerd[1536]: 2025-10-13 00:04:20.045 [INFO][4242] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910 Oct 13 00:04:20.072840 containerd[1536]: 2025-10-13 00:04:20.048 [INFO][4242] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" host="localhost" Oct 13 00:04:20.072840 containerd[1536]: 2025-10-13 00:04:20.052 [INFO][4242] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" host="localhost" Oct 13 00:04:20.072840 containerd[1536]: 2025-10-13 00:04:20.052 [INFO][4242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" host="localhost" Oct 13 00:04:20.072840 containerd[1536]: 2025-10-13 00:04:20.052 [INFO][4242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:04:20.072840 containerd[1536]: 2025-10-13 00:04:20.052 [INFO][4242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" HandleID="k8s-pod-network.69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Workload="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.072961 containerd[1536]: 2025-10-13 00:04:20.055 [INFO][4210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w65sl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3d63c12c-b533-48c9-a0bb-e9884317fc41", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-w65sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cc804b7a16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:20.072961 containerd[1536]: 2025-10-13 00:04:20.055 [INFO][4210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.072961 containerd[1536]: 2025-10-13 00:04:20.055 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cc804b7a16 ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.072961 containerd[1536]: 2025-10-13 00:04:20.060 
[INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.072961 containerd[1536]: 2025-10-13 00:04:20.060 [INFO][4210] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w65sl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3d63c12c-b533-48c9-a0bb-e9884317fc41", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910", Pod:"coredns-66bc5c9577-w65sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cc804b7a16", MAC:"6e:49:75:a2:ca:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:20.072961 containerd[1536]: 2025-10-13 00:04:20.070 [INFO][4210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" Namespace="kube-system" Pod="coredns-66bc5c9577-w65sl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w65sl-eth0" Oct 13 00:04:20.095898 containerd[1536]: time="2025-10-13T00:04:20.095837760Z" level=info msg="connecting to shim 69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910" address="unix:///run/containerd/s/e4c28ae997bb7fc08717b736f80d37b85d1cc241f1433e2cdfe9f3fbba46bdac" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:20.122408 systemd[1]: Started 
cri-containerd-69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910.scope - libcontainer container 69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910. Oct 13 00:04:20.134141 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:20.160340 systemd-networkd[1448]: cali5ce4538153f: Link UP Oct 13 00:04:20.160694 systemd-networkd[1448]: cali5ce4538153f: Gained carrier Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:19.960 [INFO][4191] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:19.983 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--854f97d977--g7xgn-eth0 goldmane-854f97d977- calico-system 43e33533-46a8-4b8a-bb17-c138b6d6eb1d 792 0 2025-10-13 00:04:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-854f97d977-g7xgn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5ce4538153f [] [] }} ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:19.983 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.024 [INFO][4235] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" HandleID="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Workload="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.024 [INFO][4235] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" HandleID="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Workload="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-854f97d977-g7xgn", "timestamp":"2025-10-13 00:04:20.024340189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.024 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.052 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
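In the struct dump above, the coredns container ports are printed in hex (Port:0x35, 0x23c1, 0x1f90, 0x1ff5); they are the same decimal ports listed in the human-readable form of the endpoint earlier in the entry: dns 53, metrics 9153, liveness-probe 8080, readiness-probe 8181. A trivial conversion table for reference:

// ports.go - the hex Port values in the WorkloadEndpointPort dump above are
// the decimal ports listed earlier in the same entry (dns 53, metrics 9153,
// liveness-probe 8080, readiness-probe 8181).
package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns":             0x35,   // 53 (UDP and TCP)
		"metrics":         0x23c1, // 9153
		"liveness-probe":  0x1f90, // 8080
		"readiness-probe": 0x1ff5, // 8181
	}
	for name, p := range ports {
		fmt.Printf("%-16s %d (0x%x)\n", name, p, p)
	}
}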
Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.052 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.131 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.136 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.140 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.142 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.144 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.144 [INFO][4235] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.146 [INFO][4235] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.149 [INFO][4235] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.154 [INFO][4235] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.154 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" host="localhost" Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.154 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:04:20.174328 containerd[1536]: 2025-10-13 00:04:20.154 [INFO][4235] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" HandleID="k8s-pod-network.c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Workload="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.175015 containerd[1536]: 2025-10-13 00:04:20.156 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--g7xgn-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"43e33533-46a8-4b8a-bb17-c138b6d6eb1d", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-854f97d977-g7xgn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5ce4538153f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:20.175015 containerd[1536]: 2025-10-13 00:04:20.157 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.175015 containerd[1536]: 2025-10-13 00:04:20.157 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ce4538153f ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.175015 containerd[1536]: 2025-10-13 00:04:20.159 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.175015 containerd[1536]: 2025-10-13 00:04:20.160 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--g7xgn-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"43e33533-46a8-4b8a-bb17-c138b6d6eb1d", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb", Pod:"goldmane-854f97d977-g7xgn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5ce4538153f", MAC:"6a:da:66:76:22:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:20.175015 containerd[1536]: 2025-10-13 00:04:20.170 [INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" Namespace="calico-system" Pod="goldmane-854f97d977-g7xgn" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--g7xgn-eth0" Oct 13 00:04:20.176989 containerd[1536]: time="2025-10-13T00:04:20.176955676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w65sl,Uid:3d63c12c-b533-48c9-a0bb-e9884317fc41,Namespace:kube-system,Attempt:0,} returns sandbox id \"69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910\"" Oct 13 00:04:20.181894 containerd[1536]: time="2025-10-13T00:04:20.181817733Z" level=info msg="CreateContainer within sandbox \"69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:04:20.191949 containerd[1536]: time="2025-10-13T00:04:20.191919239Z" level=info msg="Container 32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:20.194425 containerd[1536]: time="2025-10-13T00:04:20.194395212Z" level=info msg="connecting to shim c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb" address="unix:///run/containerd/s/e747e3cad411a32baf0708cd7ed3ec0c9610e79c37d582d4dd4527c5afcb802f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:20.197482 containerd[1536]: time="2025-10-13T00:04:20.197346065Z" level=info msg="CreateContainer within sandbox \"69f9a6d3792b1e79a79658e661b2878803c2a636eb80bffb8527ab7b6f169910\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789\"" Oct 13 00:04:20.197888 containerd[1536]: time="2025-10-13T00:04:20.197866229Z" level=info msg="StartContainer for \"32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789\"" Oct 13 00:04:20.198912 containerd[1536]: time="2025-10-13T00:04:20.198885117Z" level=info 
msg="connecting to shim 32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789" address="unix:///run/containerd/s/e4c28ae997bb7fc08717b736f80d37b85d1cc241f1433e2cdfe9f3fbba46bdac" protocol=ttrpc version=3 Oct 13 00:04:20.225400 systemd[1]: Started cri-containerd-32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789.scope - libcontainer container 32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789. Oct 13 00:04:20.226647 systemd[1]: Started cri-containerd-c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb.scope - libcontainer container c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb. Oct 13 00:04:20.245130 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:20.265127 containerd[1536]: time="2025-10-13T00:04:20.265080313Z" level=info msg="StartContainer for \"32491a8a311539a39e12d7afd43f6bc4ec77beb70f1e3af9c7e1878b7d7ad789\" returns successfully" Oct 13 00:04:20.275711 systemd-networkd[1448]: cali6b76da143fe: Link UP Oct 13 00:04:20.275842 systemd-networkd[1448]: cali6b76da143fe: Gained carrier Oct 13 00:04:20.288487 containerd[1536]: time="2025-10-13T00:04:20.286248848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-g7xgn,Uid:43e33533-46a8-4b8a-bb17-c138b6d6eb1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb\"" Oct 13 00:04:20.291412 containerd[1536]: time="2025-10-13T00:04:20.291379408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:19.965 [INFO][4203] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:19.987 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--8fb7b-eth0 coredns-66bc5c9577- kube-system b0ab16ba-64b0-4e22-a345-afb015a9a0a4 790 0 2025-10-13 00:03:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-8fb7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b76da143fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:19.987 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.028 [INFO][4240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" HandleID="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Workload="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.028 [INFO][4240] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" HandleID="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Workload="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-8fb7b", "timestamp":"2025-10-13 00:04:20.028297568 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.028 [INFO][4240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.154 [INFO][4240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.154 [INFO][4240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.233 [INFO][4240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.238 [INFO][4240] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.245 [INFO][4240] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.250 [INFO][4240] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.253 [INFO][4240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.253 [INFO][4240] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.255 [INFO][4240] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68 Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.258 [INFO][4240] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.269 [INFO][4240] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.269 [INFO][4240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" host="localhost" Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.269 [INFO][4240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
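This second IPAM request ([4240], for coredns-66bc5c9577-8fb7b) logs "About to acquire host-wide IPAM lock" at 00:04:20.028 but only acquires it at 00:04:20.154 — the instant the first request ([4235]) releases it — so address assignments on a host are strictly serialized. The toy sketch below mimics that behaviour with a plain mutex; the hostIPAM type and its fields are invented for illustration and are not the calico-ipam plugin's data structures.

```go
// Illustration only: two concurrent "assign" calls serialized by a
// per-host lock, the way the log shows [4240] waiting on [4235].
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	next int        // next free ordinal inside the block (assumed state)
}

func (h *hostIPAM) assign() int {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	n := h.next
	h.next++
	return n
}

func main() {
	h := &hostIPAM{next: 3} // .131 is the 4th address after the block base .128
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("assigned ordinal", h.assign())
		}()
	}
	wg.Wait()
}
```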
Oct 13 00:04:20.295281 containerd[1536]: 2025-10-13 00:04:20.270 [INFO][4240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" HandleID="k8s-pod-network.65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Workload="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.295793 containerd[1536]: 2025-10-13 00:04:20.273 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8fb7b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b0ab16ba-64b0-4e22-a345-afb015a9a0a4", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-8fb7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b76da143fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:20.295793 containerd[1536]: 2025-10-13 00:04:20.274 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.295793 containerd[1536]: 2025-10-13 00:04:20.274 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b76da143fe ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.295793 containerd[1536]: 2025-10-13 00:04:20.276 
[INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.295793 containerd[1536]: 2025-10-13 00:04:20.276 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8fb7b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b0ab16ba-64b0-4e22-a345-afb015a9a0a4", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68", Pod:"coredns-66bc5c9577-8fb7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b76da143fe", MAC:"16:c9:45:33:2f:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:20.295793 containerd[1536]: 2025-10-13 00:04:20.291 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" Namespace="kube-system" Pod="coredns-66bc5c9577-8fb7b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8fb7b-eth0" Oct 13 00:04:20.314890 containerd[1536]: time="2025-10-13T00:04:20.314825979Z" level=info msg="connecting to shim 65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68" address="unix:///run/containerd/s/e365b80261e60a69d35e2edfda5c4d66116355a1abe4fac7707bc23d29ef1eb1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:20.339410 systemd[1]: Started 
cri-containerd-65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68.scope - libcontainer container 65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68. Oct 13 00:04:20.352783 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:20.392123 containerd[1536]: time="2025-10-13T00:04:20.392066483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fb7b,Uid:b0ab16ba-64b0-4e22-a345-afb015a9a0a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68\"" Oct 13 00:04:20.401848 containerd[1536]: time="2025-10-13T00:04:20.401809958Z" level=info msg="CreateContainer within sandbox \"65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:04:20.431964 containerd[1536]: time="2025-10-13T00:04:20.431862936Z" level=info msg="Container 622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:20.440562 containerd[1536]: time="2025-10-13T00:04:20.440497316Z" level=info msg="CreateContainer within sandbox \"65183a4d7e64c645b3a5907bbcf6f79294a50dfb7debeec8d23f9fbf8b867a68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1\"" Oct 13 00:04:20.442703 containerd[1536]: time="2025-10-13T00:04:20.442570414Z" level=info msg="StartContainer for \"622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1\"" Oct 13 00:04:20.445485 containerd[1536]: time="2025-10-13T00:04:20.445444180Z" level=info msg="connecting to shim 622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1" address="unix:///run/containerd/s/e365b80261e60a69d35e2edfda5c4d66116355a1abe4fac7707bc23d29ef1eb1" protocol=ttrpc version=3 Oct 13 00:04:20.474396 systemd[1]: Started cri-containerd-622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1.scope - libcontainer container 622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1. 
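Each container started here runs inside a transient systemd scope named cri-containerd-&lt;container ID&gt;.scope, as the "Started cri-containerd-…scope" entries show. A small sketch for recovering the 64-character container ID from such a unit name; the helper name is made up for the example.

```go
// Sketch: strip the prefix/suffix that the scope naming pattern in the
// log uses around the container ID.
package main

import (
	"fmt"
	"strings"
)

func containerIDFromUnit(unit string) string {
	return strings.TrimSuffix(strings.TrimPrefix(unit, "cri-containerd-"), ".scope")
}

func main() {
	unit := "cri-containerd-622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1.scope"
	fmt.Println(containerIDFromUnit(unit))
}
```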
Oct 13 00:04:20.507278 containerd[1536]: time="2025-10-13T00:04:20.507212037Z" level=info msg="StartContainer for \"622573949411d1995aa08616b330baa4b420fe3a96bbfcc6f9df80ad7508f9a1\" returns successfully" Oct 13 00:04:20.927432 containerd[1536]: time="2025-10-13T00:04:20.927387109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c99b8446c-xnkwj,Uid:39731893-90ec-438e-8f26-b921f2ff0284,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:21.033567 systemd-networkd[1448]: cali81b571405cf: Link UP Oct 13 00:04:21.033695 systemd-networkd[1448]: cali81b571405cf: Gained carrier Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.955 [INFO][4515] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.971 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0 calico-kube-controllers-c99b8446c- calico-system 39731893-90ec-438e-8f26-b921f2ff0284 793 0 2025-10-13 00:04:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c99b8446c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c99b8446c-xnkwj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81b571405cf [] [] }} ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.971 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.994 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" HandleID="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Workload="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.995 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" HandleID="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Workload="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c99b8446c-xnkwj", "timestamp":"2025-10-13 00:04:20.994953623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.995 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.995 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:20.995 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.004 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.009 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.013 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.015 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.017 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.017 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.019 [INFO][4529] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.023 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.029 [INFO][4529] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.029 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" host="localhost" Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.029 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:04:21.045769 containerd[1536]: 2025-10-13 00:04:21.029 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" HandleID="k8s-pod-network.2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Workload="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.046530 containerd[1536]: 2025-10-13 00:04:21.031 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0", GenerateName:"calico-kube-controllers-c99b8446c-", Namespace:"calico-system", SelfLink:"", UID:"39731893-90ec-438e-8f26-b921f2ff0284", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c99b8446c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c99b8446c-xnkwj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81b571405cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:21.046530 containerd[1536]: 2025-10-13 00:04:21.031 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.046530 containerd[1536]: 2025-10-13 00:04:21.031 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81b571405cf ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.046530 containerd[1536]: 2025-10-13 00:04:21.033 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.046530 containerd[1536]: 2025-10-13 00:04:21.033 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0", GenerateName:"calico-kube-controllers-c99b8446c-", Namespace:"calico-system", SelfLink:"", UID:"39731893-90ec-438e-8f26-b921f2ff0284", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c99b8446c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e", Pod:"calico-kube-controllers-c99b8446c-xnkwj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81b571405cf", MAC:"b2:d2:2f:7e:cd:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:21.046530 containerd[1536]: 2025-10-13 00:04:21.043 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" Namespace="calico-system" Pod="calico-kube-controllers-c99b8446c-xnkwj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c99b8446c--xnkwj-eth0" Oct 13 00:04:21.065625 containerd[1536]: time="2025-10-13T00:04:21.065573266Z" level=info msg="connecting to shim 2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e" address="unix:///run/containerd/s/63098f19b7b6c85a0c9a0d78185f5d26e8713629a55b563f458723fdb5474495" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:21.093431 systemd[1]: Started cri-containerd-2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e.scope - libcontainer container 2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e. 
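The workload endpoints written to the datastore above carry generated MACs such as b2:d2:2f:7e:cd:b5 and 16:c9:45:33:2f:20. A quick check, assuming nothing beyond Go's net package, that these are locally administered unicast addresses (the two low bits of the first octet encode exactly that):

```go
// Sketch: inspect one of the endpoint MACs recorded in the log.
package main

import (
	"fmt"
	"net"
)

func main() {
	mac, err := net.ParseMAC("b2:d2:2f:7e:cd:b5")
	if err != nil {
		panic(err)
	}
	fmt.Println("locally administered:", mac[0]&0x02 != 0) // true
	fmt.Println("multicast:", mac[0]&0x01 != 0)            // false (unicast)
}
```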
Oct 13 00:04:21.109808 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:21.157387 kubelet[2669]: I1013 00:04:21.157324 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8fb7b" podStartSLOduration=31.157306485 podStartE2EDuration="31.157306485s" podCreationTimestamp="2025-10-13 00:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:04:21.156833085 +0000 UTC m=+37.322102786" watchObservedRunningTime="2025-10-13 00:04:21.157306485 +0000 UTC m=+37.322576146" Oct 13 00:04:21.173099 containerd[1536]: time="2025-10-13T00:04:21.172995548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c99b8446c-xnkwj,Uid:39731893-90ec-438e-8f26-b921f2ff0284,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e\"" Oct 13 00:04:21.201263 kubelet[2669]: I1013 00:04:21.200935 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w65sl" podStartSLOduration=31.200917227 podStartE2EDuration="31.200917227s" podCreationTimestamp="2025-10-13 00:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:04:21.200497672 +0000 UTC m=+37.365767333" watchObservedRunningTime="2025-10-13 00:04:21.200917227 +0000 UTC m=+37.366186888" Oct 13 00:04:21.336402 systemd-networkd[1448]: cali5cc804b7a16: Gained IPv6LL Oct 13 00:04:21.622960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284896664.mount: Deactivated successfully. 
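kubelet's pod_startup_latency_tracker reports podStartSLOduration of roughly 31.16 s for coredns-66bc5c9577-8fb7b. The same figure falls out of subtracting podCreationTimestamp from observedRunningTime as copied from the entry above — a sketch, with the timestamp layout assumed to match Go's default time formatting:

```go
// Sketch: reproduce the ~31 s startup duration from the two timestamps
// kubelet logs for the coredns pod.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err1 := time.Parse(layout, "2025-10-13 00:03:50 +0000 UTC")
	running, err2 := time.Parse(layout, "2025-10-13 00:04:21.156833085 +0000 UTC")
	if err1 != nil || err2 != nil {
		panic("unexpected timestamp format")
	}
	fmt.Println(running.Sub(created)) // ≈ 31.156833085s
}
```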
Oct 13 00:04:21.656406 systemd-networkd[1448]: cali6b76da143fe: Gained IPv6LL Oct 13 00:04:21.976389 systemd-networkd[1448]: cali5ce4538153f: Gained IPv6LL Oct 13 00:04:22.051185 containerd[1536]: time="2025-10-13T00:04:22.051121156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdk78,Uid:95c16496-d748-4240-90a8-51fe76dca721,Namespace:calico-system,Attempt:0,}" Oct 13 00:04:22.053500 containerd[1536]: time="2025-10-13T00:04:22.053468305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-wkqzm,Uid:1a4417d9-6c74-4135-852a-04179805653d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:04:22.054835 containerd[1536]: time="2025-10-13T00:04:22.054800532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-58jz4,Uid:9dd1b1dc-234c-4e88-ab75-26e6f1e943b5,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:04:22.090863 containerd[1536]: time="2025-10-13T00:04:22.090558491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:22.092717 containerd[1536]: time="2025-10-13T00:04:22.092681862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 00:04:22.093012 containerd[1536]: time="2025-10-13T00:04:22.092985447Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:22.107665 containerd[1536]: time="2025-10-13T00:04:22.107446971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:22.110605 containerd[1536]: time="2025-10-13T00:04:22.110575983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.81914753s" Oct 13 00:04:22.110735 containerd[1536]: time="2025-10-13T00:04:22.110717235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 00:04:22.112668 containerd[1536]: time="2025-10-13T00:04:22.112633309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 00:04:22.133469 containerd[1536]: time="2025-10-13T00:04:22.133426143Z" level=info msg="CreateContainer within sandbox \"c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 00:04:22.171387 containerd[1536]: time="2025-10-13T00:04:22.170519610Z" level=info msg="Container 6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:22.184990 containerd[1536]: time="2025-10-13T00:04:22.184939291Z" level=info msg="CreateContainer within sandbox \"c061a0a8f1a62b2d1444deb9222e47929eea21660fee822ccf0b46067dba47cb\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84\"" Oct 13 00:04:22.185830 
containerd[1536]: time="2025-10-13T00:04:22.185757037Z" level=info msg="StartContainer for \"6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84\"" Oct 13 00:04:22.187538 containerd[1536]: time="2025-10-13T00:04:22.187510338Z" level=info msg="connecting to shim 6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84" address="unix:///run/containerd/s/e747e3cad411a32baf0708cd7ed3ec0c9610e79c37d582d4dd4527c5afcb802f" protocol=ttrpc version=3 Oct 13 00:04:22.210292 systemd[1]: Started cri-containerd-6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84.scope - libcontainer container 6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84. Oct 13 00:04:22.221790 systemd-networkd[1448]: cali36f9916c28b: Link UP Oct 13 00:04:22.221932 systemd-networkd[1448]: cali36f9916c28b: Gained carrier Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.087 [INFO][4635] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.118 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0 calico-apiserver-964f9dc6f- calico-apiserver 1a4417d9-6c74-4135-852a-04179805653d 789 0 2025-10-13 00:03:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:964f9dc6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-964f9dc6f-wkqzm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36f9916c28b [] [] }} ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.120 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.162 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" HandleID="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Workload="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.163 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" HandleID="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Workload="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-964f9dc6f-wkqzm", "timestamp":"2025-10-13 00:04:22.16293524 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 
13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.163 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.163 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.163 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.178 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.185 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.189 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.194 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.197 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.197 [INFO][4680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.200 [INFO][4680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742 Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.204 [INFO][4680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.212 [INFO][4680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.213 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" host="localhost" Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.213 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
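For scale, the goldmane image pull logged a little earlier completed 61,845,178 bytes in about 1.82 s. Rough arithmetic on those two figures, copied from the "Pulled image" entry — nothing here is measured independently:

```go
// Sketch: effective throughput implied by the pull size and duration
// reported by containerd above.
package main

import "fmt"

func main() {
	const bytes = 61845178     // image size from the log
	const seconds = 1.81914753 // pull duration from the log
	mib := float64(bytes) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2f s ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
}
```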
Oct 13 00:04:22.238296 containerd[1536]: 2025-10-13 00:04:22.213 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" HandleID="k8s-pod-network.54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Workload="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.238805 containerd[1536]: 2025-10-13 00:04:22.218 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0", GenerateName:"calico-apiserver-964f9dc6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a4417d9-6c74-4135-852a-04179805653d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"964f9dc6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-964f9dc6f-wkqzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36f9916c28b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:22.238805 containerd[1536]: 2025-10-13 00:04:22.218 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.238805 containerd[1536]: 2025-10-13 00:04:22.218 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36f9916c28b ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.238805 containerd[1536]: 2025-10-13 00:04:22.221 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.238805 containerd[1536]: 2025-10-13 00:04:22.221 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0", GenerateName:"calico-apiserver-964f9dc6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a4417d9-6c74-4135-852a-04179805653d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"964f9dc6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742", Pod:"calico-apiserver-964f9dc6f-wkqzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36f9916c28b", MAC:"4e:36:78:e6:1f:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:22.238805 containerd[1536]: 2025-10-13 00:04:22.233 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-wkqzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--wkqzm-eth0" Oct 13 00:04:22.261975 containerd[1536]: time="2025-10-13T00:04:22.261889968Z" level=info msg="connecting to shim 54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742" address="unix:///run/containerd/s/5a3d4b0512749f6e5096e4376656dd56d6661f994ba067736d190b55383788b6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:22.275910 containerd[1536]: time="2025-10-13T00:04:22.275861973Z" level=info msg="StartContainer for \"6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84\" returns successfully" Oct 13 00:04:22.299555 systemd[1]: Started cri-containerd-54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742.scope - libcontainer container 54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742. 
Oct 13 00:04:22.315362 systemd-networkd[1448]: cali6117ea06ffb: Link UP Oct 13 00:04:22.315852 systemd-networkd[1448]: cali6117ea06ffb: Gained carrier Oct 13 00:04:22.317873 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.114 [INFO][4660] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.131 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0 calico-apiserver-964f9dc6f- calico-apiserver 9dd1b1dc-234c-4e88-ab75-26e6f1e943b5 791 0 2025-10-13 00:03:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:964f9dc6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-964f9dc6f-58jz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6117ea06ffb [] [] }} ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.132 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.173 [INFO][4693] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" HandleID="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Workload="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.173 [INFO][4693] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" HandleID="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Workload="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-964f9dc6f-58jz4", "timestamp":"2025-10-13 00:04:22.173062415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.173 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.214 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
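Each sandbox brings up a cali* veth on the host, and systemd-networkd reports Link UP, Gained carrier, and eventually Gained IPv6LL once the interface has an IPv6 link-local address. The sketch below would list those interfaces and their link-local addresses if run on the node itself; only the "cali" name prefix is taken from the log, everything else is standard library.

```go
// Sketch: enumerate Calico veth interfaces and print any IPv6 link-local
// address, i.e. what "Gained IPv6LL" refers to.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.IsLinkLocalUnicast() {
				fmt.Printf("%s: %s\n", ifc.Name, ipnet.IP)
			}
		}
	}
}
```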
Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.214 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.278 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.288 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.293 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.295 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.297 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.298 [INFO][4693] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.299 [INFO][4693] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91 Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.304 [INFO][4693] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.310 [INFO][4693] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.310 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" host="localhost" Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.311 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:04:22.334227 containerd[1536]: 2025-10-13 00:04:22.311 [INFO][4693] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" HandleID="k8s-pod-network.f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Workload="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.334809 containerd[1536]: 2025-10-13 00:04:22.313 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0", GenerateName:"calico-apiserver-964f9dc6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9dd1b1dc-234c-4e88-ab75-26e6f1e943b5", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"964f9dc6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-964f9dc6f-58jz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6117ea06ffb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:22.334809 containerd[1536]: 2025-10-13 00:04:22.313 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.334809 containerd[1536]: 2025-10-13 00:04:22.313 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6117ea06ffb ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.334809 containerd[1536]: 2025-10-13 00:04:22.316 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.334809 containerd[1536]: 2025-10-13 00:04:22.317 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0", GenerateName:"calico-apiserver-964f9dc6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9dd1b1dc-234c-4e88-ab75-26e6f1e943b5", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"964f9dc6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91", Pod:"calico-apiserver-964f9dc6f-58jz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6117ea06ffb", MAC:"e2:2d:f3:56:c2:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:22.334809 containerd[1536]: 2025-10-13 00:04:22.328 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" Namespace="calico-apiserver" Pod="calico-apiserver-964f9dc6f-58jz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--964f9dc6f--58jz4-eth0" Oct 13 00:04:22.393095 containerd[1536]: time="2025-10-13T00:04:22.393056490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-wkqzm,Uid:1a4417d9-6c74-4135-852a-04179805653d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742\"" Oct 13 00:04:22.449097 containerd[1536]: time="2025-10-13T00:04:22.449009996Z" level=info msg="connecting to shim f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91" address="unix:///run/containerd/s/b4b71a5413078fc4e2beeb5eece7c16e5bd32562c71234955db4a1235052f92c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:22.452668 systemd-networkd[1448]: cali43bf869fcc2: Link UP Oct 13 00:04:22.454976 systemd-networkd[1448]: cali43bf869fcc2: Gained carrier Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.102 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.126 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pdk78-eth0 csi-node-driver- calico-system 95c16496-d748-4240-90a8-51fe76dca721 694 0 2025-10-13 00:04:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pdk78 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali43bf869fcc2 [] [] }} ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.126 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.177 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" HandleID="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Workload="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.178 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" HandleID="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Workload="localhost-k8s-csi--node--driver--pdk78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pdk78", "timestamp":"2025-10-13 00:04:22.177511573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.178 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.311 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.311 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.391 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.398 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.409 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.412 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.416 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.416 [INFO][4686] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.421 [INFO][4686] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7 Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.426 [INFO][4686] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.438 [INFO][4686] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.438 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" host="localhost" Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.438 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
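Here the same IPAM cycle runs again for csi-node-driver-pdk78 and hands out 192.168.88.136/26 from the same affine block. A tiny, purely illustrative Python check that the two claims visible in this log sit inside 192.168.88.128/26 and do not collide; the dictionary below is hand-copied from the entries above, not produced by any tool:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    # The two addresses claimed in the entries above (host part only).
    claims = {
        "calico-apiserver-964f9dc6f-58jz4": ipaddress.ip_address("192.168.88.135"),
        "csi-node-driver-pdk78": ipaddress.ip_address("192.168.88.136"),
    }

    assert all(ip in block for ip in claims.values()), "claim outside the affine block"
    assert len(set(claims.values())) == len(claims), "duplicate claim"
    print(f"{len(claims)} claims OK inside {block} ({block.num_addresses} addresses in the block)")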
Oct 13 00:04:22.473036 containerd[1536]: 2025-10-13 00:04:22.438 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" HandleID="k8s-pod-network.938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Workload="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.473699 containerd[1536]: 2025-10-13 00:04:22.448 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdk78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95c16496-d748-4240-90a8-51fe76dca721", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pdk78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43bf869fcc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:22.473699 containerd[1536]: 2025-10-13 00:04:22.448 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.473699 containerd[1536]: 2025-10-13 00:04:22.449 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43bf869fcc2 ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.473699 containerd[1536]: 2025-10-13 00:04:22.455 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.473699 containerd[1536]: 2025-10-13 00:04:22.456 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdk78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95c16496-d748-4240-90a8-51fe76dca721", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 4, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7", Pod:"csi-node-driver-pdk78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43bf869fcc2", MAC:"72:41:a7:60:93:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:04:22.473699 containerd[1536]: 2025-10-13 00:04:22.469 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" Namespace="calico-system" Pod="csi-node-driver-pdk78" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdk78-eth0" Oct 13 00:04:22.476413 systemd[1]: Started cri-containerd-f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91.scope - libcontainer container f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91. Oct 13 00:04:22.488867 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:22.491880 containerd[1536]: time="2025-10-13T00:04:22.491830084Z" level=info msg="connecting to shim 938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7" address="unix:///run/containerd/s/959f8b400a87ca207cd883240c60a682989dc98cf85370496bde222248f2a127" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:04:22.514410 systemd[1]: Started cri-containerd-938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7.scope - libcontainer container 938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7. 
Oct 13 00:04:22.519248 containerd[1536]: time="2025-10-13T00:04:22.519196088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-964f9dc6f-58jz4,Uid:9dd1b1dc-234c-4e88-ab75-26e6f1e943b5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91\"" Oct 13 00:04:22.527228 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:04:22.541066 containerd[1536]: time="2025-10-13T00:04:22.541020205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdk78,Uid:95c16496-d748-4240-90a8-51fe76dca721,Namespace:calico-system,Attempt:0,} returns sandbox id \"938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7\"" Oct 13 00:04:22.936372 systemd-networkd[1448]: cali81b571405cf: Gained IPv6LL Oct 13 00:04:23.189326 kubelet[2669]: I1013 00:04:23.188387 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-g7xgn" podStartSLOduration=19.367214396 podStartE2EDuration="21.18836849s" podCreationTimestamp="2025-10-13 00:04:02 +0000 UTC" firstStartedPulling="2025-10-13 00:04:20.290964613 +0000 UTC m=+36.456234274" lastFinishedPulling="2025-10-13 00:04:22.112118707 +0000 UTC m=+38.277388368" observedRunningTime="2025-10-13 00:04:23.186853971 +0000 UTC m=+39.352123632" watchObservedRunningTime="2025-10-13 00:04:23.18836849 +0000 UTC m=+39.353638151" Oct 13 00:04:23.359746 containerd[1536]: time="2025-10-13T00:04:23.359665117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84\" id:\"07878069c6652e68506211ddef67e0173b2cb8725c406915b1c5323d55b17138\" pid:4936 exit_status:1 exited_at:{seconds:1760313863 nanos:359224163}" Oct 13 00:04:23.705604 systemd-networkd[1448]: cali6117ea06ffb: Gained IPv6LL Oct 13 00:04:23.852582 containerd[1536]: time="2025-10-13T00:04:23.852513715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:23.853870 containerd[1536]: time="2025-10-13T00:04:23.853820577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 00:04:23.854749 containerd[1536]: time="2025-10-13T00:04:23.854704367Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:23.857539 containerd[1536]: time="2025-10-13T00:04:23.857492704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:23.858397 containerd[1536]: time="2025-10-13T00:04:23.857906897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.745128696s" Oct 13 00:04:23.858397 containerd[1536]: time="2025-10-13T00:04:23.857948780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" 
returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 00:04:23.859099 containerd[1536]: time="2025-10-13T00:04:23.859074788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:04:23.871673 containerd[1536]: time="2025-10-13T00:04:23.871615008Z" level=info msg="CreateContainer within sandbox \"2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 00:04:23.880922 containerd[1536]: time="2025-10-13T00:04:23.880015225Z" level=info msg="Container 07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:23.887162 containerd[1536]: time="2025-10-13T00:04:23.887118620Z" level=info msg="CreateContainer within sandbox \"2e6278c8cd49afb6f0ec3f1fb84f42b42b200cad524c51d6c5152c70e56a0c3e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec\"" Oct 13 00:04:23.887948 containerd[1536]: time="2025-10-13T00:04:23.887925363Z" level=info msg="StartContainer for \"07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec\"" Oct 13 00:04:23.889305 containerd[1536]: time="2025-10-13T00:04:23.889218024Z" level=info msg="connecting to shim 07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec" address="unix:///run/containerd/s/63098f19b7b6c85a0c9a0d78185f5d26e8713629a55b563f458723fdb5474495" protocol=ttrpc version=3 Oct 13 00:04:23.913474 systemd[1]: Started cri-containerd-07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec.scope - libcontainer container 07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec. Oct 13 00:04:23.964864 containerd[1536]: time="2025-10-13T00:04:23.964650959Z" level=info msg="StartContainer for \"07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec\" returns successfully" Oct 13 00:04:24.089691 systemd-networkd[1448]: cali43bf869fcc2: Gained IPv6LL Oct 13 00:04:24.152380 systemd-networkd[1448]: cali36f9916c28b: Gained IPv6LL Oct 13 00:04:24.223841 kubelet[2669]: I1013 00:04:24.223698 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c99b8446c-xnkwj" podStartSLOduration=19.542585362 podStartE2EDuration="22.223680029s" podCreationTimestamp="2025-10-13 00:04:02 +0000 UTC" firstStartedPulling="2025-10-13 00:04:21.177822869 +0000 UTC m=+37.343092530" lastFinishedPulling="2025-10-13 00:04:23.858917536 +0000 UTC m=+40.024187197" observedRunningTime="2025-10-13 00:04:24.222763599 +0000 UTC m=+40.388033300" watchObservedRunningTime="2025-10-13 00:04:24.223680029 +0000 UTC m=+40.388949690" Oct 13 00:04:24.249208 containerd[1536]: time="2025-10-13T00:04:24.249162764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84\" id:\"5e47f2b8f1b346779affc8840e282f4bb0530f2a07eb56c1ebde7b17635649ab\" pid:5039 exit_status:1 exited_at:{seconds:1760313864 nanos:248847220}" Oct 13 00:04:25.055428 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:35590.service - OpenSSH per-connection server daemon (10.0.0.1:35590). 
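The pod_startup_latency_tracker entry for goldmane-854f97d977-g7xgn above is internally consistent: watchObservedRunningTime minus podCreationTimestamp gives the reported 21.18836849s podStartE2EDuration, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, about 1.82s) leaves the 19.367214396 podStartSLOduration. A small Python sketch that re-derives those numbers from the quoted key="value" fields; the field names are copied verbatim from the line above, and fractional seconds are truncated to microseconds, so the recomputed values agree only to six decimal places:

    import re
    from datetime import datetime, timezone

    # Abbreviated copy of the kubelet line above.
    LINE = ('pod="calico-system/goldmane-854f97d977-g7xgn" podStartSLOduration=19.367214396 '
            'podStartE2EDuration="21.18836849s" '
            'podCreationTimestamp="2025-10-13 00:04:02 +0000 UTC" '
            'firstStartedPulling="2025-10-13 00:04:20.290964613 +0000 UTC m=+36.456234274" '
            'lastFinishedPulling="2025-10-13 00:04:22.112118707 +0000 UTC m=+38.277388368" '
            'watchObservedRunningTime="2025-10-13 00:04:23.18836849 +0000 UTC m=+39.353638151"')

    def ts(field, line):
        """Parse one quoted kubelet timestamp, e.g. 2025-10-13 00:04:23.18836849 +0000 UTC."""
        raw = re.search(field + r'="([^"]+?)(?: m=\+[0-9.]+)?"', line).group(1)
        raw = raw.replace(" +0000 UTC", "")
        raw = re.sub(r"\.(\d{6})\d+", r".\1", raw)  # strptime's %f takes at most six digits
        fmt = "%Y-%m-%d %H:%M:%S.%f" if "." in raw else "%Y-%m-%d %H:%M:%S"
        return datetime.strptime(raw, fmt).replace(tzinfo=timezone.utc)

    e2e = ts("watchObservedRunningTime", LINE) - ts("podCreationTimestamp", LINE)
    pull = ts("lastFinishedPulling", LINE) - ts("firstStartedPulling", LINE)
    print(f"E2E {e2e.total_seconds():.6f}s, pull {pull.total_seconds():.6f}s, "
          f"E2E minus pull {(e2e - pull).total_seconds():.6f}s")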
Oct 13 00:04:25.150222 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 35590 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:25.152215 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:25.160692 systemd-logind[1518]: New session 8 of user core. Oct 13 00:04:25.165434 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 00:04:25.221880 containerd[1536]: time="2025-10-13T00:04:25.221835439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec\" id:\"7d638ad83e8ef2cb2df06e3b30dd5ff9d9864b7e6650dbd0da6b03eaf84688bd\" pid:5095 exited_at:{seconds:1760313865 nanos:219727564}" Oct 13 00:04:25.268930 kubelet[2669]: I1013 00:04:25.268432 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:25.489595 sshd[5082]: Connection closed by 10.0.0.1 port 35590 Oct 13 00:04:25.489908 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:25.494817 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:35590.service: Deactivated successfully. Oct 13 00:04:25.498933 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 00:04:25.500145 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit. Oct 13 00:04:25.501954 systemd-logind[1518]: Removed session 8. Oct 13 00:04:25.668034 containerd[1536]: time="2025-10-13T00:04:25.667789971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:25.668358 containerd[1536]: time="2025-10-13T00:04:25.668314570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 00:04:25.669145 containerd[1536]: time="2025-10-13T00:04:25.669118469Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:25.671223 containerd[1536]: time="2025-10-13T00:04:25.671169181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:25.671810 containerd[1536]: time="2025-10-13T00:04:25.671778706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.812585828s" Oct 13 00:04:25.671852 containerd[1536]: time="2025-10-13T00:04:25.671823029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:04:25.673273 containerd[1536]: time="2025-10-13T00:04:25.673206331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:04:25.678480 containerd[1536]: time="2025-10-13T00:04:25.678449918Z" level=info msg="CreateContainer within sandbox \"54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:04:25.683972 containerd[1536]: 
time="2025-10-13T00:04:25.683201989Z" level=info msg="Container 534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:25.691694 containerd[1536]: time="2025-10-13T00:04:25.691630092Z" level=info msg="CreateContainer within sandbox \"54fe9ade2f90470b49e7bf967b898c1ee5b2dcfe260e9d344cc6400102130742\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421\"" Oct 13 00:04:25.693269 containerd[1536]: time="2025-10-13T00:04:25.693202168Z" level=info msg="StartContainer for \"534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421\"" Oct 13 00:04:25.694819 containerd[1536]: time="2025-10-13T00:04:25.694713719Z" level=info msg="connecting to shim 534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421" address="unix:///run/containerd/s/5a3d4b0512749f6e5096e4376656dd56d6661f994ba067736d190b55383788b6" protocol=ttrpc version=3 Oct 13 00:04:25.717474 systemd[1]: Started cri-containerd-534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421.scope - libcontainer container 534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421. Oct 13 00:04:25.917203 containerd[1536]: time="2025-10-13T00:04:25.917097221Z" level=info msg="StartContainer for \"534e537b1bf670591acd2c2648e839344a7df886fb33ea19b5d7c2109f374421\" returns successfully" Oct 13 00:04:26.325159 containerd[1536]: time="2025-10-13T00:04:26.325110839Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:26.325868 containerd[1536]: time="2025-10-13T00:04:26.325837611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:04:26.327973 containerd[1536]: time="2025-10-13T00:04:26.327935522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 654.700548ms" Oct 13 00:04:26.328036 containerd[1536]: time="2025-10-13T00:04:26.327978645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:04:26.329412 containerd[1536]: time="2025-10-13T00:04:26.329365064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 00:04:26.334530 containerd[1536]: time="2025-10-13T00:04:26.334489433Z" level=info msg="CreateContainer within sandbox \"f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:04:26.340384 containerd[1536]: time="2025-10-13T00:04:26.340337613Z" level=info msg="Container 9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:26.351095 containerd[1536]: time="2025-10-13T00:04:26.351051183Z" level=info msg="CreateContainer within sandbox \"f2cea00568d22dc248e0050446295096f78763b2f03a76fa9c968e24c0598f91\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff\"" Oct 13 00:04:26.351699 
containerd[1536]: time="2025-10-13T00:04:26.351633505Z" level=info msg="StartContainer for \"9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff\"" Oct 13 00:04:26.352877 containerd[1536]: time="2025-10-13T00:04:26.352734064Z" level=info msg="connecting to shim 9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff" address="unix:///run/containerd/s/b4b71a5413078fc4e2beeb5eece7c16e5bd32562c71234955db4a1235052f92c" protocol=ttrpc version=3 Oct 13 00:04:26.383484 systemd[1]: Started cri-containerd-9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff.scope - libcontainer container 9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff. Oct 13 00:04:26.430639 containerd[1536]: time="2025-10-13T00:04:26.430592022Z" level=info msg="StartContainer for \"9aa97f9c2a1432fa68cbdf719341b3a0eb95d55e7a9453d72886ced580354cff\" returns successfully" Oct 13 00:04:26.584674 systemd-networkd[1448]: vxlan.calico: Link UP Oct 13 00:04:26.584686 systemd-networkd[1448]: vxlan.calico: Gained carrier Oct 13 00:04:27.195265 kubelet[2669]: I1013 00:04:27.194924 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:27.208249 kubelet[2669]: I1013 00:04:27.208165 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-964f9dc6f-wkqzm" podStartSLOduration=25.930678111 podStartE2EDuration="29.20814694s" podCreationTimestamp="2025-10-13 00:03:58 +0000 UTC" firstStartedPulling="2025-10-13 00:04:22.395232705 +0000 UTC m=+38.560502326" lastFinishedPulling="2025-10-13 00:04:25.672701494 +0000 UTC m=+41.837971155" observedRunningTime="2025-10-13 00:04:26.22289477 +0000 UTC m=+42.388164431" watchObservedRunningTime="2025-10-13 00:04:27.20814694 +0000 UTC m=+43.373416721" Oct 13 00:04:27.209356 kubelet[2669]: I1013 00:04:27.208657 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-964f9dc6f-58jz4" podStartSLOduration=25.400916113 podStartE2EDuration="29.208648335s" podCreationTimestamp="2025-10-13 00:03:58 +0000 UTC" firstStartedPulling="2025-10-13 00:04:22.521485192 +0000 UTC m=+38.686754853" lastFinishedPulling="2025-10-13 00:04:26.329217414 +0000 UTC m=+42.494487075" observedRunningTime="2025-10-13 00:04:27.207162911 +0000 UTC m=+43.372432612" watchObservedRunningTime="2025-10-13 00:04:27.208648335 +0000 UTC m=+43.373917956" Oct 13 00:04:27.539372 containerd[1536]: time="2025-10-13T00:04:27.539263578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:27.542331 containerd[1536]: time="2025-10-13T00:04:27.542286550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 00:04:27.543080 containerd[1536]: time="2025-10-13T00:04:27.543058124Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:27.546326 containerd[1536]: time="2025-10-13T00:04:27.545552579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:27.546450 containerd[1536]: time="2025-10-13T00:04:27.546364395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.216970329s" Oct 13 00:04:27.546450 containerd[1536]: time="2025-10-13T00:04:27.546390957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 00:04:27.553226 containerd[1536]: time="2025-10-13T00:04:27.553190514Z" level=info msg="CreateContainer within sandbox \"938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 00:04:27.562973 containerd[1536]: time="2025-10-13T00:04:27.562930076Z" level=info msg="Container 2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:04:27.583089 containerd[1536]: time="2025-10-13T00:04:27.583043605Z" level=info msg="CreateContainer within sandbox \"938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532\"" Oct 13 00:04:27.583583 containerd[1536]: time="2025-10-13T00:04:27.583558441Z" level=info msg="StartContainer for \"2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532\"" Oct 13 00:04:27.586915 containerd[1536]: time="2025-10-13T00:04:27.586325275Z" level=info msg="connecting to shim 2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532" address="unix:///run/containerd/s/959f8b400a87ca207cd883240c60a682989dc98cf85370496bde222248f2a127" protocol=ttrpc version=3 Oct 13 00:04:27.599948 kubelet[2669]: I1013 00:04:27.599866 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:27.609579 systemd[1]: Started cri-containerd-2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532.scope - libcontainer container 2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532. 
Oct 13 00:04:27.665005 containerd[1536]: time="2025-10-13T00:04:27.664960024Z" level=info msg="StartContainer for \"2cbd835dd768b41a92ec5db265add600c21b1643bb456cc847c9b9e7883cd532\" returns successfully" Oct 13 00:04:27.666077 containerd[1536]: time="2025-10-13T00:04:27.666051901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 00:04:27.707186 containerd[1536]: time="2025-10-13T00:04:27.707148740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54\" id:\"ad19546b42c4aa1f360018e42375b5972ff02a198ad43cb886cac80ea4aa717b\" pid:5379 exit_status:1 exited_at:{seconds:1760313867 nanos:706886682}" Oct 13 00:04:27.795297 containerd[1536]: time="2025-10-13T00:04:27.795070780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bfccd11879fd4d470d056f82df4213d0bfcf81a3215fe2d505cb983283f5a54\" id:\"5c68135f527111873c3f4949b479790a2aba9f5083fbd3ef5499375cb27d80d4\" pid:5415 exit_status:1 exited_at:{seconds:1760313867 nanos:794632949}" Oct 13 00:04:27.992436 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Oct 13 00:04:28.200598 kubelet[2669]: I1013 00:04:28.200561 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:28.667383 containerd[1536]: time="2025-10-13T00:04:28.666851831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:28.667755 containerd[1536]: time="2025-10-13T00:04:28.667582121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 00:04:28.668720 containerd[1536]: time="2025-10-13T00:04:28.668676795Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:28.670982 containerd[1536]: time="2025-10-13T00:04:28.670926269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:04:28.672357 containerd[1536]: time="2025-10-13T00:04:28.672326325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.006247782s" Oct 13 00:04:28.672428 containerd[1536]: time="2025-10-13T00:04:28.672359567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 00:04:28.677231 containerd[1536]: time="2025-10-13T00:04:28.676901838Z" level=info msg="CreateContainer within sandbox \"938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 00:04:28.688515 containerd[1536]: time="2025-10-13T00:04:28.687332310Z" level=info msg="Container 210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81: CDI devices from CRI Config.CDIDevices: []" Oct 13 
00:04:28.690490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846914430.mount: Deactivated successfully. Oct 13 00:04:28.696558 containerd[1536]: time="2025-10-13T00:04:28.696519578Z" level=info msg="CreateContainer within sandbox \"938981c1facc13cbad21af8f39bf9bfe82d12f47061105ef3a878418204397f7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81\"" Oct 13 00:04:28.697304 containerd[1536]: time="2025-10-13T00:04:28.697171543Z" level=info msg="StartContainer for \"210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81\"" Oct 13 00:04:28.699159 containerd[1536]: time="2025-10-13T00:04:28.699129157Z" level=info msg="connecting to shim 210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81" address="unix:///run/containerd/s/959f8b400a87ca207cd883240c60a682989dc98cf85370496bde222248f2a127" protocol=ttrpc version=3 Oct 13 00:04:28.719416 systemd[1]: Started cri-containerd-210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81.scope - libcontainer container 210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81. Oct 13 00:04:28.753873 containerd[1536]: time="2025-10-13T00:04:28.753822054Z" level=info msg="StartContainer for \"210ff5c0ee2c75365d1465cf5874d96acd4f1b36141f9e0b6c4c00b3288c4c81\" returns successfully" Oct 13 00:04:29.009387 kubelet[2669]: I1013 00:04:29.009342 2669 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 00:04:29.010173 kubelet[2669]: I1013 00:04:29.010128 2669 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 00:04:29.220622 kubelet[2669]: I1013 00:04:29.220555 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pdk78" podStartSLOduration=21.089036638 podStartE2EDuration="27.220535155s" podCreationTimestamp="2025-10-13 00:04:02 +0000 UTC" firstStartedPulling="2025-10-13 00:04:22.542280987 +0000 UTC m=+38.707550648" lastFinishedPulling="2025-10-13 00:04:28.673779504 +0000 UTC m=+44.839049165" observedRunningTime="2025-10-13 00:04:29.219356277 +0000 UTC m=+45.384625898" watchObservedRunningTime="2025-10-13 00:04:29.220535155 +0000 UTC m=+45.385804816" Oct 13 00:04:30.505770 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:36778.service - OpenSSH per-connection server daemon (10.0.0.1:36778). Oct 13 00:04:30.576953 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 36778 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:30.578586 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:30.583021 systemd-logind[1518]: New session 9 of user core. Oct 13 00:04:30.592443 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 00:04:30.840903 sshd[5471]: Connection closed by 10.0.0.1 port 36778 Oct 13 00:04:30.841196 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:30.846860 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:36778.service: Deactivated successfully. Oct 13 00:04:30.848676 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 00:04:30.852190 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit. Oct 13 00:04:30.853302 systemd-logind[1518]: Removed session 9. 
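The TaskExit entries a few lines back (the two at 00:04:27 for container 7bfccd11879f…) report the exit time as a protobuf-style seconds/nanos pair on the Unix epoch; seconds:1760313867 nanos:706886682 corresponds to the same 00:04:27.706… instant as the journal prefix on that entry. A short Python helper, keyed to the exited_at:{...} form used in these messages, for turning the pair into a readable UTC timestamp:

    import re
    from datetime import datetime, timedelta, timezone

    EXITED = re.compile(r'exited_at:\{seconds:(?P<s>\d+)(?: nanos:(?P<ns>\d+))?\}')

    def exited_at(line):
        """Return the TaskExit exit time as an aware UTC datetime, or None if absent."""
        m = EXITED.search(line)
        if not m:
            return None
        secs = int(m.group("s"))
        micros = int(m.group("ns") or 0) // 1000   # keep microsecond precision
        return datetime.fromtimestamp(secs, tz=timezone.utc) + timedelta(microseconds=micros)

    sample = 'TaskExit ... exited_at:{seconds:1760313867 nanos:706886682}'
    print(exited_at(sample).isoformat())   # 2025-10-13T00:04:27.706886+00:00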
Oct 13 00:04:34.045066 kubelet[2669]: I1013 00:04:34.041842 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:35.860788 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:41292.service - OpenSSH per-connection server daemon (10.0.0.1:41292). Oct 13 00:04:35.928555 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 41292 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:35.930504 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:35.935598 systemd-logind[1518]: New session 10 of user core. Oct 13 00:04:35.943419 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 00:04:36.091498 sshd[5500]: Connection closed by 10.0.0.1 port 41292 Oct 13 00:04:36.092950 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:36.105403 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:41292.service: Deactivated successfully. Oct 13 00:04:36.108089 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 00:04:36.109967 systemd-logind[1518]: Session 10 logged out. Waiting for processes to exit. Oct 13 00:04:36.114399 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:41296.service - OpenSSH per-connection server daemon (10.0.0.1:41296). Oct 13 00:04:36.118379 systemd-logind[1518]: Removed session 10. Oct 13 00:04:36.175787 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 41296 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:36.177459 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:36.183567 systemd-logind[1518]: New session 11 of user core. Oct 13 00:04:36.202452 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 00:04:36.418312 sshd[5521]: Connection closed by 10.0.0.1 port 41296 Oct 13 00:04:36.418889 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:36.428058 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:41296.service: Deactivated successfully. Oct 13 00:04:36.431653 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 00:04:36.433463 systemd-logind[1518]: Session 11 logged out. Waiting for processes to exit. Oct 13 00:04:36.441636 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:41306.service - OpenSSH per-connection server daemon (10.0.0.1:41306). Oct 13 00:04:36.443625 systemd-logind[1518]: Removed session 11. Oct 13 00:04:36.506282 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 41306 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:36.507409 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:36.511173 systemd-logind[1518]: New session 12 of user core. Oct 13 00:04:36.521398 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 00:04:36.691568 sshd[5538]: Connection closed by 10.0.0.1 port 41306 Oct 13 00:04:36.691900 sshd-session[5535]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:36.695536 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:41306.service: Deactivated successfully. Oct 13 00:04:36.697228 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 00:04:36.699372 systemd-logind[1518]: Session 12 logged out. Waiting for processes to exit. Oct 13 00:04:36.700570 systemd-logind[1518]: Removed session 12. 
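Sessions 10 through 12 above all follow the same sshd/systemd pattern: Accepted publickey, pam_unix session opened, "New session N of user core", then connection closed, session scope deactivated, and "Removed session N". A Python sketch that pairs the "New session" and "Removed session" systemd-logind entries to get per-session wall-clock durations; it assumes journalctl-style one-entry-per-line input with the "Oct 13 00:04:36.691568" prefix seen throughout this log, and hard-codes the year since the prefix omits it:

    import re
    import sys
    from datetime import datetime

    PREFIX = re.compile(r'^(?P<mon>\w{3}) +(?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)')
    NEW = re.compile(r'New session (?P<id>\d+) of user \S+\.')
    REMOVED = re.compile(r'Removed session (?P<id>\d+)\.')

    def stamp(line, year=2025):
        """Timestamp of one journal line, or None if the prefix does not match."""
        m = PREFIX.match(line)
        if not m:
            return None
        return datetime.strptime(
            f'{year} {m.group("mon")} {m.group("day")} {m.group("time")}',
            "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        """Yield (session_id, seconds_open) for each session opened and removed in the stream."""
        opened = {}
        for line in lines:
            t = stamp(line)
            if t is None:
                continue
            m = NEW.search(line)
            if m:
                opened[m.group("id")] = t
            m = REMOVED.search(line)
            if m and m.group("id") in opened:
                yield m.group("id"), (t - opened.pop(m.group("id"))).total_seconds()

    if __name__ == "__main__":
        for sid, secs in session_durations(sys.stdin):
            print(f"session {sid}: open for {secs:.3f}s")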
Oct 13 00:04:41.708792 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:41320.service - OpenSSH per-connection server daemon (10.0.0.1:41320). Oct 13 00:04:41.772510 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 41320 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:41.773974 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:41.779401 systemd-logind[1518]: New session 13 of user core. Oct 13 00:04:41.793489 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 00:04:41.913263 sshd[5563]: Connection closed by 10.0.0.1 port 41320 Oct 13 00:04:41.913545 sshd-session[5560]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:41.927569 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:41320.service: Deactivated successfully. Oct 13 00:04:41.929752 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 00:04:41.930539 systemd-logind[1518]: Session 13 logged out. Waiting for processes to exit. Oct 13 00:04:41.933153 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:41334.service - OpenSSH per-connection server daemon (10.0.0.1:41334). Oct 13 00:04:41.934306 systemd-logind[1518]: Removed session 13. Oct 13 00:04:41.996214 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 41334 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:41.997806 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:42.002132 systemd-logind[1518]: New session 14 of user core. Oct 13 00:04:42.007382 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 00:04:42.206347 sshd[5579]: Connection closed by 10.0.0.1 port 41334 Oct 13 00:04:42.207000 sshd-session[5576]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:42.224859 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:41334.service: Deactivated successfully. Oct 13 00:04:42.226689 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 00:04:42.227496 systemd-logind[1518]: Session 14 logged out. Waiting for processes to exit. Oct 13 00:04:42.229762 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:41342.service - OpenSSH per-connection server daemon (10.0.0.1:41342). Oct 13 00:04:42.230650 systemd-logind[1518]: Removed session 14. Oct 13 00:04:42.308808 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 41342 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:42.310179 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:42.314984 systemd-logind[1518]: New session 15 of user core. Oct 13 00:04:42.327435 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 00:04:42.978608 sshd[5593]: Connection closed by 10.0.0.1 port 41342 Oct 13 00:04:42.979338 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:42.992107 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:41342.service: Deactivated successfully. Oct 13 00:04:42.995571 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 00:04:42.999409 systemd-logind[1518]: Session 15 logged out. Waiting for processes to exit. Oct 13 00:04:43.005528 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:41350.service - OpenSSH per-connection server daemon (10.0.0.1:41350). Oct 13 00:04:43.006397 systemd-logind[1518]: Removed session 15. 
Oct 13 00:04:43.060997 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:43.062342 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:43.066134 systemd-logind[1518]: New session 16 of user core. Oct 13 00:04:43.079442 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 00:04:43.396589 sshd[5613]: Connection closed by 10.0.0.1 port 41350 Oct 13 00:04:43.395353 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:43.407128 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:41350.service: Deactivated successfully. Oct 13 00:04:43.411338 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 00:04:43.412270 systemd-logind[1518]: Session 16 logged out. Waiting for processes to exit. Oct 13 00:04:43.417068 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:41358.service - OpenSSH per-connection server daemon (10.0.0.1:41358). Oct 13 00:04:43.418911 systemd-logind[1518]: Removed session 16. Oct 13 00:04:43.482656 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 41358 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:43.484044 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:43.488196 systemd-logind[1518]: New session 17 of user core. Oct 13 00:04:43.498431 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 00:04:43.659388 sshd[5628]: Connection closed by 10.0.0.1 port 41358 Oct 13 00:04:43.660551 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:43.664479 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:41358.service: Deactivated successfully. Oct 13 00:04:43.666345 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 00:04:43.667310 systemd-logind[1518]: Session 17 logged out. Waiting for processes to exit. Oct 13 00:04:43.668739 systemd-logind[1518]: Removed session 17. Oct 13 00:04:44.661603 kubelet[2669]: I1013 00:04:44.661490 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:04:48.673980 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:49404.service - OpenSSH per-connection server daemon (10.0.0.1:49404). Oct 13 00:04:48.737618 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 49404 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:48.738962 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:48.742791 systemd-logind[1518]: New session 18 of user core. Oct 13 00:04:48.757429 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 00:04:48.887028 sshd[5661]: Connection closed by 10.0.0.1 port 49404 Oct 13 00:04:48.887404 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:48.890881 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:49404.service: Deactivated successfully. Oct 13 00:04:48.892594 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 00:04:48.894712 systemd-logind[1518]: Session 18 logged out. Waiting for processes to exit. Oct 13 00:04:48.896083 systemd-logind[1518]: Removed session 18. Oct 13 00:04:53.904851 systemd[1]: Started sshd@18-10.0.0.63:22-10.0.0.1:49418.service - OpenSSH per-connection server daemon (10.0.0.1:49418). 
Oct 13 00:04:53.962171 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 49418 ssh2: RSA SHA256:TSsUX0+AOpz/e050I2QTTANvLGIa9yseBHOQ57c0ZcY Oct 13 00:04:53.963561 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:04:53.967353 systemd-logind[1518]: New session 19 of user core. Oct 13 00:04:53.977413 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 00:04:54.107193 sshd[5681]: Connection closed by 10.0.0.1 port 49418 Oct 13 00:04:54.107501 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Oct 13 00:04:54.110898 systemd-logind[1518]: Session 19 logged out. Waiting for processes to exit. Oct 13 00:04:54.111107 systemd[1]: sshd@18-10.0.0.63:22-10.0.0.1:49418.service: Deactivated successfully. Oct 13 00:04:54.112771 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 00:04:54.114564 systemd-logind[1518]: Removed session 19. Oct 13 00:04:54.260044 containerd[1536]: time="2025-10-13T00:04:54.260000638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6604707643be502fc940ccc817af7a595db4ab8c6140c2ce734f0b3b7ad48f84\" id:\"a2638089adbdb89aaa1b6860fee3369c5bb6ac66f8a8f1a290ee14ee861b5474\" pid:5704 exited_at:{seconds:1760313894 nanos:259642807}" Oct 13 00:04:55.219410 containerd[1536]: time="2025-10-13T00:04:55.219372613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07b69478c0017b845fb3a8c20910f9414a33dbaa7154650e70ca18c3f39388ec\" id:\"5c6d58a2e953df2a0fe4c8e1441566ca8bfb3df1c9209a5f4a186b8c945d6011\" pid:5730 exited_at:{seconds:1760313895 nanos:219147298}"
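The section closes with another round of TaskExit events for containers 6604707643be… and 07b69478c001…, roughly thirty seconds after the previous ones for the same containers; that cadence is consistent with periodic exec probes, though the log alone does not say what was executed. A last Python sketch, keyed to the TaskExit message layout in this journal, that groups the events by container_id so the cadence is easy to see:

    import re
    import sys
    from collections import defaultdict

    TASK_EXIT = re.compile(
        r'^(?P<ts>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+).*'
        r'TaskExit event in podsandbox handler container_id:\\?"(?P<cid>[0-9a-f]+)\\?"'
    )

    def exits_by_container(lines):
        """Group journal timestamps of TaskExit events by container_id."""
        out = defaultdict(list)
        for line in lines:
            m = TASK_EXIT.match(line)
            if m:
                out[m.group("cid")].append(m.group("ts"))
        return out

    if __name__ == "__main__":
        for cid, stamps in exits_by_container(sys.stdin).items():
            print(f"{cid[:12]}: {len(stamps)} exits ({', '.join(stamps)})")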