Jul 15 04:43:06.830679 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 04:43:06.830700 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025
Jul 15 04:43:06.830710 kernel: KASLR enabled
Jul 15 04:43:06.830716 kernel: efi: EFI v2.7 by EDK II
Jul 15 04:43:06.830722 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Jul 15 04:43:06.830727 kernel: random: crng init done
Jul 15 04:43:06.830734 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 15 04:43:06.830740 kernel: secureboot: Secure boot enabled
Jul 15 04:43:06.830746 kernel: ACPI: Early table checksum verification disabled
Jul 15 04:43:06.830753 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 15 04:43:06.830759 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 04:43:06.830765 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830771 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830777 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830784 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830792 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830798 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830804 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830810 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830817 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:43:06.830823 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 04:43:06.830829 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 04:43:06.830835 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 04:43:06.830841 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Jul 15 04:43:06.830847 kernel: Zone ranges:
Jul 15 04:43:06.830854 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 04:43:06.830860 kernel: DMA32 empty
Jul 15 04:43:06.830867 kernel: Normal empty
Jul 15 04:43:06.830873 kernel: Device empty
Jul 15 04:43:06.830879 kernel: Movable zone start for each node
Jul 15 04:43:06.830885 kernel: Early memory node ranges
Jul 15 04:43:06.830891 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 15 04:43:06.830897 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 15 04:43:06.830903 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 15 04:43:06.830909 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 15 04:43:06.830916 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 15 04:43:06.830922 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 15 04:43:06.830930 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 15 04:43:06.830936 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 15 04:43:06.830942 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 04:43:06.830951 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 04:43:06.830958 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 04:43:06.830964 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Jul 15 04:43:06.830971 kernel: psci: probing for conduit method from ACPI.
Jul 15 04:43:06.830979 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 04:43:06.830986 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 04:43:06.830992 kernel: psci: Trusted OS migration not required
Jul 15 04:43:06.830999 kernel: psci: SMC Calling Convention v1.1
Jul 15 04:43:06.831006 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 04:43:06.831012 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 04:43:06.831019 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 04:43:06.831026 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 04:43:06.831045 kernel: Detected PIPT I-cache on CPU0
Jul 15 04:43:06.831054 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 04:43:06.831061 kernel: CPU features: detected: Spectre-v4
Jul 15 04:43:06.831067 kernel: CPU features: detected: Spectre-BHB
Jul 15 04:43:06.831074 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 04:43:06.831080 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 04:43:06.831087 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 04:43:06.831094 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 04:43:06.831100 kernel: alternatives: applying boot alternatives
Jul 15 04:43:06.831108 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:43:06.831115 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 04:43:06.831122 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 04:43:06.831130 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 04:43:06.831137 kernel: Fallback order for Node 0: 0
Jul 15 04:43:06.831143 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 04:43:06.831150 kernel: Policy zone: DMA
Jul 15 04:43:06.831156 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 04:43:06.831163 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 04:43:06.831170 kernel: software IO TLB: area num 4.
Jul 15 04:43:06.831176 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 04:43:06.831183 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 15 04:43:06.831189 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 04:43:06.831196 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 04:43:06.831203 kernel: rcu: RCU event tracing is enabled.
Jul 15 04:43:06.831211 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 04:43:06.831218 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 04:43:06.831225 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 04:43:06.831239 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 04:43:06.831245 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 04:43:06.831252 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 04:43:06.831259 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 04:43:06.831265 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 04:43:06.831272 kernel: GICv3: 256 SPIs implemented
Jul 15 04:43:06.831278 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 04:43:06.831285 kernel: Root IRQ handler: gic_handle_irq
Jul 15 04:43:06.831293 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 15 04:43:06.831300 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 04:43:06.831306 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 04:43:06.831313 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 04:43:06.831319 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 04:43:06.831327 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 15 04:43:06.831336 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 15 04:43:06.831342 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 15 04:43:06.831349 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 04:43:06.831356 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:43:06.831362 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 04:43:06.831369 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 04:43:06.831377 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 04:43:06.831384 kernel: arm-pv: using stolen time PV
Jul 15 04:43:06.831391 kernel: Console: colour dummy device 80x25
Jul 15 04:43:06.831398 kernel: ACPI: Core revision 20240827
Jul 15 04:43:06.831405 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 04:43:06.831412 kernel: pid_max: default: 32768 minimum: 301
Jul 15 04:43:06.831419 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 04:43:06.831426 kernel: landlock: Up and running.
Jul 15 04:43:06.831432 kernel: SELinux: Initializing.
Jul 15 04:43:06.831440 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:43:06.831447 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:43:06.831454 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 04:43:06.831461 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 04:43:06.831468 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 04:43:06.831475 kernel: Remapping and enabling EFI services.
Jul 15 04:43:06.831482 kernel: smp: Bringing up secondary CPUs ...
Jul 15 04:43:06.831489 kernel: Detected PIPT I-cache on CPU1
Jul 15 04:43:06.831500 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 04:43:06.831509 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 15 04:43:06.831520 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:43:06.831527 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 04:43:06.831535 kernel: Detected PIPT I-cache on CPU2
Jul 15 04:43:06.831542 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 04:43:06.831560 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 15 04:43:06.831567 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:43:06.831574 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 04:43:06.831582 kernel: Detected PIPT I-cache on CPU3
Jul 15 04:43:06.831590 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 04:43:06.831597 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 15 04:43:06.831604 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:43:06.831611 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 04:43:06.831618 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 04:43:06.831624 kernel: SMP: Total of 4 processors activated.
Jul 15 04:43:06.831631 kernel: CPU: All CPU(s) started at EL1
Jul 15 04:43:06.831638 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 04:43:06.831645 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 04:43:06.831653 kernel: CPU features: detected: Common not Private translations
Jul 15 04:43:06.831660 kernel: CPU features: detected: CRC32 instructions
Jul 15 04:43:06.831666 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 15 04:43:06.831673 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 04:43:06.831680 kernel: CPU features: detected: LSE atomic instructions
Jul 15 04:43:06.831687 kernel: CPU features: detected: Privileged Access Never
Jul 15 04:43:06.831694 kernel: CPU features: detected: RAS Extension Support
Jul 15 04:43:06.831701 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 04:43:06.831708 kernel: alternatives: applying system-wide alternatives
Jul 15 04:43:06.831717 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 15 04:43:06.831725 kernel: Memory: 2421924K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 128028K reserved, 16384K cma-reserved)
Jul 15 04:43:06.831732 kernel: devtmpfs: initialized
Jul 15 04:43:06.831739 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 04:43:06.831746 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 04:43:06.831753 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 04:43:06.831761 kernel: 0 pages in range for non-PLT usage
Jul 15 04:43:06.831768 kernel: 508448 pages in range for PLT usage
Jul 15 04:43:06.831775 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 04:43:06.831784 kernel: SMBIOS 3.0.0 present.
Jul 15 04:43:06.831791 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 04:43:06.831798 kernel: DMI: Memory slots populated: 1/1
Jul 15 04:43:06.831805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 04:43:06.831812 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 04:43:06.831819 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 04:43:06.831826 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 04:43:06.831833 kernel: audit: initializing netlink subsys (disabled)
Jul 15 04:43:06.831841 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 15 04:43:06.831849 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 04:43:06.831857 kernel: cpuidle: using governor menu
Jul 15 04:43:06.831864 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 04:43:06.831871 kernel: ASID allocator initialised with 32768 entries
Jul 15 04:43:06.831878 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 04:43:06.831885 kernel: Serial: AMBA PL011 UART driver
Jul 15 04:43:06.831892 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 04:43:06.831899 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 04:43:06.831906 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 04:43:06.831915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 04:43:06.831922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 04:43:06.831929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 04:43:06.831949 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 04:43:06.831956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 04:43:06.831963 kernel: ACPI: Added _OSI(Module Device)
Jul 15 04:43:06.831969 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 04:43:06.831976 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 04:43:06.831983 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 04:43:06.831991 kernel: ACPI: Interpreter enabled
Jul 15 04:43:06.831998 kernel: ACPI: Using GIC for interrupt routing
Jul 15 04:43:06.832005 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 04:43:06.832012 kernel: ACPI: CPU0 has been hot-added
Jul 15 04:43:06.832018 kernel: ACPI: CPU1 has been hot-added
Jul 15 04:43:06.832025 kernel: ACPI: CPU2 has been hot-added
Jul 15 04:43:06.832042 kernel: ACPI: CPU3 has been hot-added
Jul 15 04:43:06.832050 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 04:43:06.832056 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 04:43:06.832066 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 04:43:06.832198 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 04:43:06.832278 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 04:43:06.832343 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 04:43:06.832402 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 04:43:06.832459 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 04:43:06.832468 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 04:43:06.832478 kernel: PCI host bridge to bus 0000:00
Jul 15 04:43:06.832562 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 04:43:06.832626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 04:43:06.832678 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 04:43:06.832733 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 04:43:06.832810 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 04:43:06.832883 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 04:43:06.832946 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 04:43:06.833006 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 04:43:06.833134 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 04:43:06.833196 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 04:43:06.833266 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 04:43:06.833329 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 04:43:06.833394 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 04:43:06.833487 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 04:43:06.833545 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 04:43:06.833554 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 04:43:06.833562 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 04:43:06.833569 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 04:43:06.833576 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 04:43:06.833583 kernel: iommu: Default domain type: Translated
Jul 15 04:43:06.833590 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 04:43:06.833600 kernel: efivars: Registered efivars operations
Jul 15 04:43:06.833607 kernel: vgaarb: loaded
Jul 15 04:43:06.833614 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 04:43:06.833622 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 04:43:06.833629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 04:43:06.833636 kernel: pnp: PnP ACPI init
Jul 15 04:43:06.833706 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 04:43:06.833716 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 04:43:06.833724 kernel: NET: Registered PF_INET protocol family
Jul 15 04:43:06.833732 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 04:43:06.833739 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 04:43:06.833746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 04:43:06.833753 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 04:43:06.833761 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 04:43:06.833768 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 04:43:06.833775 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:43:06.833782 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:43:06.833791 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 04:43:06.833797 kernel: PCI: CLS 0 bytes, default 64
Jul 15 04:43:06.833804 kernel: kvm [1]: HYP mode not available
Jul 15 04:43:06.833811 kernel: Initialise system trusted keyrings
Jul 15 04:43:06.833818 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 04:43:06.833825 kernel: Key type asymmetric registered
Jul 15 04:43:06.833832 kernel: Asymmetric key parser 'x509' registered
Jul 15 04:43:06.833839 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 04:43:06.833846 kernel: io scheduler mq-deadline registered
Jul 15 04:43:06.833854 kernel: io scheduler kyber registered
Jul 15 04:43:06.833861 kernel: io scheduler bfq registered
Jul 15 04:43:06.833868 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 04:43:06.833876 kernel: ACPI: button: Power Button [PWRB]
Jul 15 04:43:06.833883 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 04:43:06.833944 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 04:43:06.833954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 04:43:06.833961 kernel: thunder_xcv, ver 1.0
Jul 15 04:43:06.833968 kernel: thunder_bgx, ver 1.0
Jul 15 04:43:06.833977 kernel: nicpf, ver 1.0
Jul 15 04:43:06.833984 kernel: nicvf, ver 1.0
Jul 15 04:43:06.834061 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 04:43:06.834121 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:43:06 UTC (1752554586)
Jul 15 04:43:06.834131 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 04:43:06.834139 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 04:43:06.834146 kernel: watchdog: NMI not fully supported
Jul 15 04:43:06.834153 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 04:43:06.834162 kernel: NET: Registered PF_INET6 protocol family
Jul 15 04:43:06.834169 kernel: Segment Routing with IPv6
Jul 15 04:43:06.834176 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 04:43:06.834183 kernel: NET: Registered PF_PACKET protocol family
Jul 15 04:43:06.834190 kernel: Key type dns_resolver registered
Jul 15 04:43:06.834197 kernel: registered taskstats version 1
Jul 15 04:43:06.834204 kernel: Loading compiled-in X.509 certificates
Jul 15 04:43:06.834211 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2'
Jul 15 04:43:06.834218 kernel: Demotion targets for Node 0: null
Jul 15 04:43:06.834231 kernel: Key type .fscrypt registered
Jul 15 04:43:06.834239 kernel: Key type fscrypt-provisioning registered
Jul 15 04:43:06.834247 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 04:43:06.834254 kernel: ima: Allocated hash algorithm: sha1
Jul 15 04:43:06.834261 kernel: ima: No architecture policies found
Jul 15 04:43:06.834268 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 04:43:06.834275 kernel: clk: Disabling unused clocks
Jul 15 04:43:06.834283 kernel: PM: genpd: Disabling unused power domains
Jul 15 04:43:06.834290 kernel: Warning: unable to open an initial console.
Jul 15 04:43:06.834299 kernel: Freeing unused kernel memory: 39424K
Jul 15 04:43:06.834306 kernel: Run /init as init process
Jul 15 04:43:06.834313 kernel: with arguments:
Jul 15 04:43:06.834320 kernel: /init
Jul 15 04:43:06.834327 kernel: with environment:
Jul 15 04:43:06.834334 kernel: HOME=/
Jul 15 04:43:06.834341 kernel: TERM=linux
Jul 15 04:43:06.834348 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 04:43:06.834356 systemd[1]: Successfully made /usr/ read-only.
Jul 15 04:43:06.834367 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:43:06.834375 systemd[1]: Detected virtualization kvm.
Jul 15 04:43:06.834382 systemd[1]: Detected architecture arm64.
Jul 15 04:43:06.834389 systemd[1]: Running in initrd.
Jul 15 04:43:06.834396 systemd[1]: No hostname configured, using default hostname.
Jul 15 04:43:06.834404 systemd[1]: Hostname set to .
Jul 15 04:43:06.834412 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 04:43:06.834420 systemd[1]: Queued start job for default target initrd.target.
Jul 15 04:43:06.834428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:43:06.834435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:43:06.834443 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 04:43:06.834451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:43:06.834458 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 04:43:06.834466 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 04:43:06.834476 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 04:43:06.834484 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 04:43:06.834491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:43:06.834499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:43:06.834506 systemd[1]: Reached target paths.target - Path Units.
Jul 15 04:43:06.834514 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:43:06.834521 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:43:06.834529 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 04:43:06.834538 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:43:06.834545 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:43:06.834553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 04:43:06.834561 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 04:43:06.834568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:43:06.834576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:43:06.834583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:43:06.834591 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:43:06.834598 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 04:43:06.834607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:43:06.834614 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 04:43:06.834622 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 04:43:06.834630 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 04:43:06.834637 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:43:06.834645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:43:06.834653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:43:06.834661 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 04:43:06.834670 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:43:06.834678 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 04:43:06.834686 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:43:06.834709 systemd-journald[243]: Collecting audit messages is disabled.
Jul 15 04:43:06.834730 systemd-journald[243]: Journal started
Jul 15 04:43:06.834747 systemd-journald[243]: Runtime Journal (/run/log/journal/f540ea87b29242ee9fb97a9e8902b672) is 6M, max 48.5M, 42.4M free.
Jul 15 04:43:06.831534 systemd-modules-load[246]: Inserted module 'overlay'
Jul 15 04:43:06.841150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:43:06.844389 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:43:06.846653 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 04:43:06.849953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 04:43:06.849317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:43:06.853736 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 15 04:43:06.854649 kernel: Bridge firewalling registered
Jul 15 04:43:06.855190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:43:06.856492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:43:06.861026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:43:06.861611 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 04:43:06.862580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:43:06.864977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:43:06.872153 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:43:06.873984 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 04:43:06.880210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:43:06.882704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:43:06.885350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:43:06.890817 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:43:06.924672 systemd-resolved[294]: Positive Trust Anchors:
Jul 15 04:43:06.924688 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 04:43:06.924718 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 04:43:06.929399 systemd-resolved[294]: Defaulting to hostname 'linux'.
Jul 15 04:43:06.930277 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 04:43:06.933397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:43:06.959057 kernel: SCSI subsystem initialized
Jul 15 04:43:06.964046 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 04:43:06.971056 kernel: iscsi: registered transport (tcp)
Jul 15 04:43:06.983430 kernel: iscsi: registered transport (qla4xxx)
Jul 15 04:43:06.983466 kernel: QLogic iSCSI HBA Driver
Jul 15 04:43:06.999261 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:43:07.026102 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:43:07.028386 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:43:07.072082 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:43:07.073861 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 04:43:07.141069 kernel: raid6: neonx8 gen() 15675 MB/s
Jul 15 04:43:07.158052 kernel: raid6: neonx4 gen() 15726 MB/s
Jul 15 04:43:07.175058 kernel: raid6: neonx2 gen() 13148 MB/s
Jul 15 04:43:07.192052 kernel: raid6: neonx1 gen() 10410 MB/s
Jul 15 04:43:07.209049 kernel: raid6: int64x8 gen() 6852 MB/s
Jul 15 04:43:07.226059 kernel: raid6: int64x4 gen() 7290 MB/s
Jul 15 04:43:07.243046 kernel: raid6: int64x2 gen() 6064 MB/s
Jul 15 04:43:07.260107 kernel: raid6: int64x1 gen() 5031 MB/s
Jul 15 04:43:07.260123 kernel: raid6: using algorithm neonx4 gen() 15726 MB/s
Jul 15 04:43:07.278052 kernel: raid6: .... xor() 12312 MB/s, rmw enabled
Jul 15 04:43:07.278068 kernel: raid6: using neon recovery algorithm
Jul 15 04:43:07.283384 kernel: xor: measuring software checksum speed
Jul 15 04:43:07.283410 kernel: 8regs : 21624 MB/sec
Jul 15 04:43:07.284054 kernel: 32regs : 21676 MB/sec
Jul 15 04:43:07.285233 kernel: arm64_neon : 23222 MB/sec
Jul 15 04:43:07.285247 kernel: xor: using function: arm64_neon (23222 MB/sec)
Jul 15 04:43:07.344073 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 04:43:07.350400 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:43:07.352964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:43:07.380322 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Jul 15 04:43:07.384387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:43:07.386841 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 04:43:07.409507 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Jul 15 04:43:07.432530 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:43:07.434845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:43:07.488725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:43:07.491759 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 04:43:07.540853 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 15 04:43:07.541019 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 04:43:07.547164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:43:07.556650 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 04:43:07.556670 kernel: GPT:9289727 != 19775487
Jul 15 04:43:07.556685 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 04:43:07.556696 kernel: GPT:9289727 != 19775487
Jul 15 04:43:07.556705 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 04:43:07.556713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 04:43:07.547340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:43:07.557671 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:43:07.559424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:43:07.585107 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 04:43:07.586436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:43:07.588808 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:43:07.600546 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 04:43:07.601648 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 04:43:07.610594 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 04:43:07.622073 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 04:43:07.623211 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:43:07.625367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:43:07.627319 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:43:07.629727 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 04:43:07.631484 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 04:43:07.644538 disk-uuid[590]: Primary Header is updated.
Jul 15 04:43:07.644538 disk-uuid[590]: Secondary Entries is updated.
Jul 15 04:43:07.644538 disk-uuid[590]: Secondary Header is updated.
Jul 15 04:43:07.651066 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:43:07.653891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 04:43:08.659066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 04:43:08.659806 disk-uuid[593]: The operation has completed successfully.
Jul 15 04:43:08.684831 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 04:43:08.684928 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 04:43:08.709113 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 04:43:08.735082 sh[610]: Success
Jul 15 04:43:08.750837 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 04:43:08.752870 kernel: device-mapper: uevent: version 1.0.3
Jul 15 04:43:08.752903 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 04:43:08.765059 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 04:43:08.792958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 04:43:08.795852 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 04:43:08.818209 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 04:43:08.825365 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 04:43:08.825412 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (622)
Jul 15 04:43:08.826657 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692
Jul 15 04:43:08.826694 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:43:08.827540 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 04:43:08.831824 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 04:43:08.833123 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:43:08.834472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 04:43:08.835187 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 04:43:08.838875 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 04:43:08.863808 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (654)
Jul 15 04:43:08.863856 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:43:08.863866 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:43:08.865329 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 04:43:08.871043 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:43:08.871202 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 04:43:08.873328 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 04:43:08.937638 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:43:08.942479 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:43:08.985548 systemd-networkd[797]: lo: Link UP
Jul 15 04:43:08.985560 systemd-networkd[797]: lo: Gained carrier
Jul 15 04:43:08.986303 systemd-networkd[797]: Enumeration completed
Jul 15 04:43:08.986687 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:43:08.986690 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 04:43:08.986866 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 04:43:08.987158 systemd-networkd[797]: eth0: Link UP
Jul 15 04:43:08.987161 systemd-networkd[797]: eth0: Gained carrier
Jul 15 04:43:08.987168 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:43:08.991303 systemd[1]: Reached target network.target - Network.
Jul 15 04:43:09.016606 ignition[697]: Ignition 2.21.0
Jul 15 04:43:09.016625 ignition[697]: Stage: fetch-offline
Jul 15 04:43:09.016656 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:43:09.018104 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 04:43:09.016664 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:43:09.016894 ignition[697]: parsed url from cmdline: ""
Jul 15 04:43:09.016898 ignition[697]: no config URL provided
Jul 15 04:43:09.016906 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:43:09.016913 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:43:09.016932 ignition[697]: op(1): [started] loading QEMU firmware config module
Jul 15 04:43:09.016937 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 04:43:09.024126 ignition[697]: op(1): [finished] loading QEMU firmware config module
Jul 15 04:43:09.024149 ignition[697]: QEMU firmware config was not found. Ignoring...
Jul 15 04:43:09.065112 ignition[697]: parsing config with SHA512: 4e11696fd03dca5e33d3dd1ede116daa9073abf0b7242f270aa972dc06ac3b1a1c25863c5fdf760be399676fb0e3e0a3b21fe4618994a6865e16e28ebf8c242a
Jul 15 04:43:09.069105 unknown[697]: fetched base config from "system"
Jul 15 04:43:09.069118 unknown[697]: fetched user config from "qemu"
Jul 15 04:43:09.069476 ignition[697]: fetch-offline: fetch-offline passed
Jul 15 04:43:09.071124 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:43:09.069525 ignition[697]: Ignition finished successfully
Jul 15 04:43:09.072727 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 04:43:09.073630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 04:43:09.104384 ignition[810]: Ignition 2.21.0
Jul 15 04:43:09.104400 ignition[810]: Stage: kargs
Jul 15 04:43:09.104543 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:43:09.104553 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:43:09.105483 ignition[810]: kargs: kargs passed
Jul 15 04:43:09.108023 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 04:43:09.105535 ignition[810]: Ignition finished successfully
Jul 15 04:43:09.109955 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 04:43:09.136940 ignition[818]: Ignition 2.21.0
Jul 15 04:43:09.136958 ignition[818]: Stage: disks
Jul 15 04:43:09.137120 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:43:09.137132 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:43:09.140192 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 04:43:09.138731 ignition[818]: disks: disks passed
Jul 15 04:43:09.141852 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 04:43:09.138782 ignition[818]: Ignition finished successfully
Jul 15 04:43:09.143465 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 04:43:09.145091 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:43:09.146865 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 04:43:09.148444 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:43:09.151082 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 04:43:09.183771 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 04:43:09.187765 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 04:43:09.191145 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 04:43:09.263053 kernel: EXT4-fs (vda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none.
Jul 15 04:43:09.264989 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 04:43:09.266869 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:43:09.269073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:43:09.270637 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 04:43:09.271604 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 04:43:09.271646 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 04:43:09.271681 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:43:09.280355 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 04:43:09.282655 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 04:43:09.288158 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836)
Jul 15 04:43:09.288181 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:43:09.288192 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:43:09.288201 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 04:43:09.291202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:43:09.320643 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 04:43:09.325010 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Jul 15 04:43:09.328813 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 04:43:09.332513 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 04:43:09.403766 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 04:43:09.405782 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 04:43:09.407365 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 04:43:09.424050 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:43:09.438146 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 04:43:09.449966 ignition[949]: INFO : Ignition 2.21.0
Jul 15 04:43:09.449966 ignition[949]: INFO : Stage: mount
Jul 15 04:43:09.451891 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:43:09.451891 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:43:09.451891 ignition[949]: INFO : mount: mount passed
Jul 15 04:43:09.451891 ignition[949]: INFO : Ignition finished successfully
Jul 15 04:43:09.453645 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 04:43:09.455685 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 04:43:09.823908 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 04:43:09.825511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:43:09.842801 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (963)
Jul 15 04:43:09.842832 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:43:09.842843 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:43:09.844339 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 04:43:09.846834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:43:09.876209 ignition[980]: INFO : Ignition 2.21.0
Jul 15 04:43:09.876209 ignition[980]: INFO : Stage: files
Jul 15 04:43:09.878589 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:43:09.878589 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:43:09.880691 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 04:43:09.880691 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 04:43:09.880691 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 04:43:09.884549 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 04:43:09.884549 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 04:43:09.884549 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 04:43:09.884549 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 04:43:09.884549 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 15 04:43:09.881665 unknown[980]: wrote ssh authorized keys file for user: core
Jul 15 04:43:10.114142 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 04:43:10.246025 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:43:10.247889 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:43:10.261698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:43:10.261698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:43:10.261698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 04:43:10.261698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 04:43:10.261698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 04:43:10.261698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 15 04:43:10.715238 systemd-networkd[797]: eth0: Gained IPv6LL
Jul 15 04:43:10.813022 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 04:43:11.599284 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 04:43:11.599284 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 04:43:11.603365 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:43:11.606289 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:43:11.606289 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 04:43:11.606289 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 15 04:43:11.611695 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 04:43:11.611695 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 04:43:11.611695 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 15 04:43:11.611695 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 04:43:11.624743 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 04:43:11.628336 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 04:43:11.629860 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 04:43:11.629860 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 04:43:11.629860 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 04:43:11.629860 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:43:11.629860 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:43:11.629860 ignition[980]: INFO : files: files passed
Jul 15 04:43:11.629860 ignition[980]: INFO : Ignition finished successfully
Jul 15 04:43:11.630639 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 04:43:11.637178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 04:43:11.640369 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 04:43:11.656664 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 04:43:11.655954 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 04:43:11.656066 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 04:43:11.660874 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:43:11.660874 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:43:11.663911 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:43:11.663112 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:43:11.665334 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 04:43:11.668319 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 04:43:11.735000 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 04:43:11.736046 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 04:43:11.737420 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 04:43:11.738456 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 04:43:11.740583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 04:43:11.741412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 04:43:11.778097 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:43:11.780501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 04:43:11.830928 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:43:11.833664 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:43:11.837494 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 04:43:11.838436 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 04:43:11.838548 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:43:11.842040 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 04:43:11.844448 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 04:43:11.846052 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 04:43:11.847713 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:43:11.851514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 04:43:11.852647 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:43:11.855125 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 04:43:11.856898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:43:11.859131 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 04:43:11.861407 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 04:43:11.862944 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 04:43:11.864626 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 04:43:11.864766 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:43:11.866957 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:43:11.869013 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:43:11.870951 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 04:43:11.874118 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:43:11.875338 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 04:43:11.875453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:43:11.878257 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 04:43:11.878375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:43:11.880318 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 04:43:11.881862 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 04:43:11.886093 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:43:11.887328 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 04:43:11.889463 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 04:43:11.890792 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 04:43:11.890877 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:43:11.892400 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 04:43:11.892485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:43:11.894055 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 04:43:11.894175 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:43:11.895931 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 04:43:11.896049 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 04:43:11.898793 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 04:43:11.900235 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 04:43:11.900364 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:43:11.903148 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 04:43:11.904760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 04:43:11.904888 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:43:11.906724 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 04:43:11.906818 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:43:11.912179 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 04:43:11.916206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 04:43:11.925568 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 04:43:11.931984 ignition[1035]: INFO : Ignition 2.21.0
Jul 15 04:43:11.933329 ignition[1035]: INFO : Stage: umount
Jul 15 04:43:11.934427 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:43:11.936113 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:43:11.937333 ignition[1035]: INFO : umount: umount passed
Jul 15 04:43:11.937333 ignition[1035]: INFO : Ignition finished successfully
Jul 15 04:43:11.939258 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 04:43:11.939355 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 04:43:11.940861 systemd[1]: Stopped target network.target - Network.
Jul 15 04:43:11.941797 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 04:43:11.941865 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 04:43:11.942915 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 04:43:11.942959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 04:43:11.943948 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 04:43:11.943995 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 04:43:11.945788 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 04:43:11.945830 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 04:43:11.947734 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 04:43:11.949298 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 04:43:11.957336 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 04:43:11.957465 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 04:43:11.961560 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 04:43:11.961767 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 04:43:11.961888 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 04:43:11.965702 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 04:43:11.966839 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 04:43:11.968681 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 04:43:11.968717 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:43:11.971940 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 04:43:11.973012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 04:43:11.973102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:43:11.975120 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 04:43:11.975177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:43:11.977867 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 04:43:11.977912 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:43:11.980270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 04:43:11.980316 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:43:11.983603 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:43:11.987357 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 04:43:11.987419 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:43:11.988835 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 04:43:11.990138 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 04:43:11.993014 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 04:43:11.993259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 04:43:12.004340 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:43:12.004480 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:43:12.006734 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:43:12.006797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:43:12.008481 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 04:43:12.008511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:43:12.010167 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:43:12.010225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:43:12.012827 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:43:12.012879 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 04:43:12.015410 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:43:12.015471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:43:12.019147 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:43:12.020232 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 04:43:12.020301 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 15 04:43:12.023793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:43:12.023834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:43:12.026816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:43:12.026858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:43:12.030920 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 04:43:12.030966 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 04:43:12.030996 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:43:12.031270 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 04:43:12.045333 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:43:12.050663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:43:12.050756 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:43:12.052947 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:43:12.055392 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:43:12.076793 systemd[1]: Switching root. Jul 15 04:43:12.107154 systemd-journald[243]: Journal stopped Jul 15 04:43:12.872449 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). 
Jul 15 04:43:12.872497 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:43:12.872508 kernel: SELinux: policy capability open_perms=1 Jul 15 04:43:12.872521 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:43:12.872530 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:43:12.872539 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:43:12.872551 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:43:12.872560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:43:12.872576 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:43:12.872587 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:43:12.872597 kernel: audit: type=1403 audit(1752554592.278:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:43:12.872608 systemd[1]: Successfully loaded SELinux policy in 53.891ms. Jul 15 04:43:12.872623 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.104ms. Jul 15 04:43:12.872635 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:43:12.872655 systemd[1]: Detected virtualization kvm. Jul 15 04:43:12.872666 systemd[1]: Detected architecture arm64. Jul 15 04:43:12.872676 systemd[1]: Detected first boot. Jul 15 04:43:12.872686 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:43:12.872700 zram_generator::config[1081]: No configuration found. Jul 15 04:43:12.872711 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:43:12.872721 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:43:12.872731 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 15 04:43:12.872742 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:43:12.872751 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:43:12.872761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:43:12.872771 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:43:12.872781 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:43:12.872794 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:43:12.872804 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:43:12.872814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:43:12.872825 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 04:43:12.872835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:43:12.872844 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:43:12.872854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:43:12.872864 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:43:12.872874 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 04:43:12.872885 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:43:12.872895 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:43:12.872905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:43:12.872914 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 15 04:43:12.872924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:43:12.872934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:43:12.872944 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:43:12.872954 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:43:12.872965 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:43:12.872975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:43:12.872985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:43:12.872995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:43:12.873015 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:43:12.873043 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:43:12.873074 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 04:43:12.873084 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:43:12.873094 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:43:12.873106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:43:12.873117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:43:12.873127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:43:12.873137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:43:12.873146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:43:12.873156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:43:12.873166 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 15 04:43:12.873176 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:43:12.873186 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:43:12.873197 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:43:12.873213 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:43:12.873225 systemd[1]: Reached target machines.target - Containers. Jul 15 04:43:12.873235 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:43:12.873245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:43:12.873255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:43:12.873265 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:43:12.873274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:43:12.873286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:43:12.873296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:43:12.873306 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:43:12.873316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:43:12.873326 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:43:12.873336 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:43:12.873346 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:43:12.873356 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 15 04:43:12.873365 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:43:12.873377 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:43:12.873387 kernel: fuse: init (API version 7.41) Jul 15 04:43:12.873396 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:43:12.873406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:43:12.873416 kernel: loop: module loaded Jul 15 04:43:12.873425 kernel: ACPI: bus type drm_connector registered Jul 15 04:43:12.873435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:43:12.873444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:43:12.873454 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:43:12.873465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:43:12.873475 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:43:12.873485 systemd[1]: Stopped verity-setup.service. Jul 15 04:43:12.873495 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:43:12.873506 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:43:12.873540 systemd-journald[1151]: Collecting audit messages is disabled. Jul 15 04:43:12.873560 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:43:12.873571 systemd-journald[1151]: Journal started Jul 15 04:43:12.873592 systemd-journald[1151]: Runtime Journal (/run/log/journal/f540ea87b29242ee9fb97a9e8902b672) is 6M, max 48.5M, 42.4M free. Jul 15 04:43:12.876225 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 15 04:43:12.876267 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:43:12.640193 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:43:12.665965 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 04:43:12.666353 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 04:43:12.883229 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:43:12.880930 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:43:12.883510 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 04:43:12.884847 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:43:12.886434 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 04:43:12.886580 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:43:12.888058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:43:12.888217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:43:12.889523 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:43:12.889684 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:43:12.890900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:43:12.891156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:43:12.892527 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:43:12.892679 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:43:12.893908 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:43:12.894089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:43:12.895367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 15 04:43:12.896821 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:43:12.898327 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 04:43:12.899747 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:43:12.911470 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:43:12.913676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:43:12.915632 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 04:43:12.916796 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:43:12.916831 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:43:12.918640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:43:12.924176 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:43:12.925245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:43:12.926182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:43:12.928732 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:43:12.929874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:43:12.930935 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:43:12.932152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 04:43:12.934952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:43:12.938839 systemd-journald[1151]: Time spent on flushing to /var/log/journal/f540ea87b29242ee9fb97a9e8902b672 is 15.268ms for 881 entries. Jul 15 04:43:12.938839 systemd-journald[1151]: System Journal (/var/log/journal/f540ea87b29242ee9fb97a9e8902b672) is 8M, max 195.6M, 187.6M free. Jul 15 04:43:12.964624 systemd-journald[1151]: Received client request to flush runtime journal. Jul 15 04:43:12.939187 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 04:43:12.941907 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:43:12.944782 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:43:12.946196 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:43:12.947521 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:43:12.957261 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 04:43:12.958619 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:43:12.962660 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:43:12.966899 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:43:12.971077 kernel: loop0: detected capacity change from 0 to 134232 Jul 15 04:43:12.973317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:43:12.985057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:43:12.988174 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:43:12.994681 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 15 04:43:12.997750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:43:13.003052 kernel: loop1: detected capacity change from 0 to 207008 Jul 15 04:43:13.029434 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 15 04:43:13.029748 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 15 04:43:13.032772 kernel: loop2: detected capacity change from 0 to 105936 Jul 15 04:43:13.033778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:43:13.064081 kernel: loop3: detected capacity change from 0 to 134232 Jul 15 04:43:13.071046 kernel: loop4: detected capacity change from 0 to 207008 Jul 15 04:43:13.077062 kernel: loop5: detected capacity change from 0 to 105936 Jul 15 04:43:13.079819 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 04:43:13.080233 (sd-merge)[1223]: Merged extensions into '/usr'. Jul 15 04:43:13.083769 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:43:13.083792 systemd[1]: Reloading... Jul 15 04:43:13.131613 zram_generator::config[1248]: No configuration found. Jul 15 04:43:13.200977 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 04:43:13.211261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:43:13.273335 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:43:13.273453 systemd[1]: Reloading finished in 188 ms. Jul 15 04:43:13.301618 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:43:13.303224 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jul 15 04:43:13.316240 systemd[1]: Starting ensure-sysext.service... Jul 15 04:43:13.318012 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:43:13.333822 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 04:43:13.334019 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:43:13.334289 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:43:13.334307 systemd[1]: Reloading... Jul 15 04:43:13.334337 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:43:13.334515 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:43:13.335136 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:43:13.335357 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 15 04:43:13.335409 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 15 04:43:13.337622 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:43:13.337629 systemd-tmpfiles[1284]: Skipping /boot Jul 15 04:43:13.343375 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:43:13.343390 systemd-tmpfiles[1284]: Skipping /boot Jul 15 04:43:13.379075 zram_generator::config[1314]: No configuration found. Jul 15 04:43:13.440636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:43:13.501693 systemd[1]: Reloading finished in 167 ms. 
Jul 15 04:43:13.527491 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:43:13.530063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:43:13.546873 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:43:13.549057 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 04:43:13.551100 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:43:13.554090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:43:13.557295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:43:13.561286 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:43:13.569483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:43:13.577926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:43:13.581838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:43:13.584793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:43:13.586140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:43:13.586276 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:43:13.591259 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:43:13.594799 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 15 04:43:13.597268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:43:13.599060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:43:13.602008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:43:13.608216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:43:13.610101 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:43:13.610250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:43:13.612469 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Jul 15 04:43:13.616707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:43:13.617991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:43:13.620135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:43:13.620765 augenrules[1381]: No rules Jul 15 04:43:13.622314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:43:13.623470 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:43:13.623675 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:43:13.632012 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:43:13.634901 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:43:13.635946 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:43:13.638820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 15 04:43:13.641399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:43:13.641552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:43:13.643425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:43:13.645902 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:43:13.648475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:43:13.648657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:43:13.651496 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:43:13.651645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:43:13.659658 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:43:13.677380 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:43:13.692466 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:43:13.694348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:43:13.695419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:43:13.699103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:43:13.704226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:43:13.707314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:43:13.708466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 15 04:43:13.708512 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:43:13.716181 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:43:13.717570 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:43:13.718576 systemd[1]: Finished ensure-sysext.service. Jul 15 04:43:13.722420 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:43:13.724104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:43:13.725478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:43:13.725649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:43:13.744227 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:43:13.744640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:43:13.746968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:43:13.749172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:43:13.756321 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 04:43:13.757218 augenrules[1428]: /sbin/augenrules: No change Jul 15 04:43:13.760835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:43:13.760890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 04:43:13.763793 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 04:43:13.768897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 04:43:13.771546 augenrules[1464]: No rules Jul 15 04:43:13.771870 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:43:13.773491 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:43:13.773789 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:43:13.793749 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:43:13.799870 systemd-resolved[1350]: Positive Trust Anchors: Jul 15 04:43:13.799886 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:43:13.799921 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:43:13.806300 systemd-resolved[1350]: Defaulting to hostname 'linux'. Jul 15 04:43:13.811534 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:43:13.813067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 15 04:43:13.824128 systemd-networkd[1435]: lo: Link UP Jul 15 04:43:13.824135 systemd-networkd[1435]: lo: Gained carrier Jul 15 04:43:13.824962 systemd-networkd[1435]: Enumeration completed Jul 15 04:43:13.825088 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:43:13.826658 systemd[1]: Reached target network.target - Network. Jul 15 04:43:13.827520 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:43:13.827530 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:43:13.828145 systemd-networkd[1435]: eth0: Link UP Jul 15 04:43:13.828296 systemd-networkd[1435]: eth0: Gained carrier Jul 15 04:43:13.828315 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:43:13.830456 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:43:13.832723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:43:13.838544 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 04:43:13.840119 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:43:13.841404 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:43:13.842678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:43:13.844168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:43:13.845402 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 15 04:43:13.845434 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:43:13.846434 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 04:43:13.846592 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:43:13.847795 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:43:13.849112 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:43:13.850328 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:43:13.851750 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Jul 15 04:43:13.851837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:43:13.854360 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:43:13.856627 systemd-timesyncd[1462]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 04:43:13.856732 systemd-timesyncd[1462]: Initial clock synchronization to Tue 2025-07-15 04:43:14.110866 UTC. Jul 15 04:43:13.857985 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:43:13.859372 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:43:13.860646 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 04:43:13.866729 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 04:43:13.868607 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:43:13.870904 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 04:43:13.873126 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:43:13.874867 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:43:13.876198 systemd[1]: Reached target basic.target - Basic System. 
Jul 15 04:43:13.877146 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:43:13.877179 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:43:13.878230 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:43:13.880261 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:43:13.882976 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:43:13.885735 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 04:43:13.888130 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:43:13.889226 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:43:13.892826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 04:43:13.898212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:43:13.900780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:43:13.904340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:43:13.910048 jq[1496]: false Jul 15 04:43:13.907740 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:43:13.910107 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 04:43:13.910552 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:43:13.911486 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 15 04:43:13.919199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:43:13.926868 extend-filesystems[1497]: Found /dev/vda6 Jul 15 04:43:13.938568 extend-filesystems[1497]: Found /dev/vda9 Jul 15 04:43:13.939654 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:43:13.941666 extend-filesystems[1497]: Checking size of /dev/vda9 Jul 15 04:43:13.942518 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 04:43:13.943558 jq[1508]: true Jul 15 04:43:13.942696 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:43:13.944440 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:43:13.944617 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:43:13.947111 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:43:13.947316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:43:13.959435 extend-filesystems[1497]: Resized partition /dev/vda9 Jul 15 04:43:13.968099 extend-filesystems[1535]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 04:43:13.971002 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:43:13.977432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 04:43:13.978356 jq[1523]: true Jul 15 04:43:13.996958 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 04:43:14.025860 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 04:43:14.025917 update_engine[1507]: I20250715 04:43:14.007231 1507 main.cc:92] Flatcar Update Engine starting Jul 15 04:43:14.026943 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 04:43:14.026943 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 04:43:14.026943 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 04:43:14.034876 extend-filesystems[1497]: Resized filesystem in /dev/vda9 Jul 15 04:43:14.030038 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:43:14.032252 dbus-daemon[1494]: [system] SELinux support is enabled Jul 15 04:43:14.042758 update_engine[1507]: I20250715 04:43:14.042213 1507 update_check_scheduler.cc:74] Next update check in 3m12s Jul 15 04:43:14.034335 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 04:43:14.044943 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 04:43:14.045444 systemd-logind[1506]: New seat seat0. Jul 15 04:43:14.065344 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 04:43:14.068645 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:43:14.072110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:43:14.077256 tar[1521]: linux-arm64/LICENSE Jul 15 04:43:14.077256 tar[1521]: linux-arm64/helm Jul 15 04:43:14.079185 dbus-daemon[1494]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 04:43:14.080013 systemd[1]: Started update-engine.service - Update Engine. 
Jul 15 04:43:14.082400 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 04:43:14.082560 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:43:14.084158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:43:14.084300 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 04:43:14.088528 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:43:14.089280 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:43:14.091983 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 04:43:14.094220 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 15 04:43:14.154300 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:43:14.215476 containerd[1524]: time="2025-07-15T04:43:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:43:14.218253 containerd[1524]: time="2025-07-15T04:43:14.218206808Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:43:14.228082 containerd[1524]: time="2025-07-15T04:43:14.227947548Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.823µs" Jul 15 04:43:14.228082 containerd[1524]: time="2025-07-15T04:43:14.227986384Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:43:14.228082 containerd[1524]: time="2025-07-15T04:43:14.228004833Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:43:14.228334 containerd[1524]: time="2025-07-15T04:43:14.228309251Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:43:14.228406 containerd[1524]: time="2025-07-15T04:43:14.228391960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 04:43:14.228469 containerd[1524]: time="2025-07-15T04:43:14.228456508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:43:14.228574 containerd[1524]: time="2025-07-15T04:43:14.228556096Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:43:14.228628 containerd[1524]: time="2025-07-15T04:43:14.228613010Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:43:14.228929 containerd[1524]: time="2025-07-15T04:43:14.228902942Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229007 containerd[1524]: time="2025-07-15T04:43:14.228990892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229085 containerd[1524]: time="2025-07-15T04:43:14.229045412Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229136 containerd[1524]: time="2025-07-15T04:43:14.229123085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229292 containerd[1524]: time="2025-07-15T04:43:14.229262046Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229568 containerd[1524]: time="2025-07-15T04:43:14.229543600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229658 containerd[1524]: time="2025-07-15T04:43:14.229641042Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:43:14.229710 containerd[1524]: time="2025-07-15T04:43:14.229695108Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:43:14.229806 containerd[1524]: time="2025-07-15T04:43:14.229788836Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:43:14.230127 containerd[1524]: time="2025-07-15T04:43:14.230106214Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:43:14.230269 containerd[1524]: time="2025-07-15T04:43:14.230250705Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:43:14.234400 containerd[1524]: time="2025-07-15T04:43:14.234373235Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:43:14.234521 containerd[1524]: time="2025-07-15T04:43:14.234505139Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:43:14.234608 containerd[1524]: time="2025-07-15T04:43:14.234594863Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:43:14.234683 containerd[1524]: time="2025-07-15T04:43:14.234668533Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:43:14.234736 containerd[1524]: time="2025-07-15T04:43:14.234722351Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:43:14.234789 containerd[1524]: time="2025-07-15T04:43:14.234775922Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 04:43:14.234842 containerd[1524]: time="2025-07-15T04:43:14.234828749Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:43:14.234893 containerd[1524]: time="2025-07-15T04:43:14.234880792Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:43:14.234947 containerd[1524]: time="2025-07-15T04:43:14.234934280Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:43:14.235008 containerd[1524]: time="2025-07-15T04:43:14.234995527Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:43:14.235092 containerd[1524]: time="2025-07-15T04:43:14.235047158Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:43:14.235148 containerd[1524]: time="2025-07-15T04:43:14.235135231Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:43:14.235312 containerd[1524]: time="2025-07-15T04:43:14.235289009Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:43:14.235386 containerd[1524]: time="2025-07-15T04:43:14.235370809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:43:14.235445 containerd[1524]: time="2025-07-15T04:43:14.235432056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 04:43:14.235525 containerd[1524]: time="2025-07-15T04:43:14.235509771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:43:14.235595 containerd[1524]: time="2025-07-15T04:43:14.235582243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 04:43:14.235648 containerd[1524]: time="2025-07-15T04:43:14.235636350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:43:14.235702 containerd[1524]: time="2025-07-15T04:43:14.235689508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:43:14.235759 containerd[1524]: time="2025-07-15T04:43:14.235746793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 
04:43:14.235825 containerd[1524]: time="2025-07-15T04:43:14.235810888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:43:14.235875 containerd[1524]: time="2025-07-15T04:43:14.235863591Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:43:14.235931 containerd[1524]: time="2025-07-15T04:43:14.235916171Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:43:14.236197 containerd[1524]: time="2025-07-15T04:43:14.236177626Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:43:14.236272 containerd[1524]: time="2025-07-15T04:43:14.236258106Z" level=info msg="Start snapshots syncer" Jul 15 04:43:14.236345 containerd[1524]: time="2025-07-15T04:43:14.236330785Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:43:14.236865 containerd[1524]: time="2025-07-15T04:43:14.236746142Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:43:14.236865 containerd[1524]: time="2025-07-15T04:43:14.236806027Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:43:14.238020 containerd[1524]: time="2025-07-15T04:43:14.237980160Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238230101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238257959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238269515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238291306Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238304678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238315574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238326304Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238352388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238364522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238379627Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238441204Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238456268Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:43:14.238537 containerd[1524]: time="2025-07-15T04:43:14.238464605Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:43:14.238790 containerd[1524]: time="2025-07-15T04:43:14.238473974Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:43:14.238790 containerd[1524]: time="2025-07-15T04:43:14.238481650Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:43:14.238790 containerd[1524]: time="2025-07-15T04:43:14.238490854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:43:14.238876 containerd[1524]: time="2025-07-15T04:43:14.238501131Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:43:14.239010 containerd[1524]: time="2025-07-15T04:43:14.238996348Z" level=info msg="runtime interface created" Jul 15 04:43:14.239055 containerd[1524]: time="2025-07-15T04:43:14.239043315Z" level=info msg="created NRI interface" Jul 15 04:43:14.239135 containerd[1524]: time="2025-07-15T04:43:14.239121029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:43:14.239187 containerd[1524]: time="2025-07-15T04:43:14.239174434Z" level=info msg="Connect containerd service" Jul 15 04:43:14.239259 containerd[1524]: time="2025-07-15T04:43:14.239246659Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:43:14.240417 
containerd[1524]: time="2025-07-15T04:43:14.240385671Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348666468Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348726064Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348744925Z" level=info msg="Start subscribing containerd event" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348771628Z" level=info msg="Start recovering state" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348855450Z" level=info msg="Start event monitor" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348867543Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348874394Z" level=info msg="Start streaming server" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348882277Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348889458Z" level=info msg="runtime interface starting up..." Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348895277Z" level=info msg="starting plugins..." Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.348906586Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:43:14.349093 containerd[1524]: time="2025-07-15T04:43:14.349023260Z" level=info msg="containerd successfully booted in 0.134041s" Jul 15 04:43:14.349149 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 15 04:43:14.384505 tar[1521]: linux-arm64/README.md Jul 15 04:43:14.401037 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 04:43:14.764669 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:43:14.784835 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:43:14.788484 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:43:14.812900 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:43:14.813145 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:43:14.816014 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:43:14.837370 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 04:43:14.840275 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:43:14.842462 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 04:43:14.843868 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 04:43:15.195219 systemd-networkd[1435]: eth0: Gained IPv6LL Jul 15 04:43:15.197749 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:43:15.199489 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 04:43:15.201802 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 04:43:15.204192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:43:15.218822 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 04:43:15.233556 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 04:43:15.235116 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 04:43:15.237231 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 15 04:43:15.242184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:43:15.779297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:43:15.780887 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 04:43:15.782781 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:43:15.786792 systemd[1]: Startup finished in 2.096s (kernel) + 5.649s (initrd) + 3.564s (userspace) = 11.310s. Jul 15 04:43:16.201791 kubelet[1635]: E0715 04:43:16.201732 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:43:16.203958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:43:16.204135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:43:16.204516 systemd[1]: kubelet.service: Consumed 802ms CPU time, 256.3M memory peak. Jul 15 04:43:20.049294 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:43:20.050338 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:58206.service - OpenSSH per-connection server daemon (10.0.0.1:58206). Jul 15 04:43:20.141855 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 58206 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:43:20.143746 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:43:20.151395 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:43:20.152271 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 15 04:43:20.157560 systemd-logind[1506]: New session 1 of user core. Jul 15 04:43:20.180083 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:43:20.182572 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:43:20.197125 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:43:20.199361 systemd-logind[1506]: New session c1 of user core. Jul 15 04:43:20.305486 systemd[1653]: Queued start job for default target default.target. Jul 15 04:43:20.321145 systemd[1653]: Created slice app.slice - User Application Slice. Jul 15 04:43:20.321959 systemd[1653]: Reached target paths.target - Paths. Jul 15 04:43:20.322023 systemd[1653]: Reached target timers.target - Timers. Jul 15 04:43:20.323244 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:43:20.332853 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:43:20.332920 systemd[1653]: Reached target sockets.target - Sockets. Jul 15 04:43:20.332958 systemd[1653]: Reached target basic.target - Basic System. Jul 15 04:43:20.332985 systemd[1653]: Reached target default.target - Main User Target. Jul 15 04:43:20.333011 systemd[1653]: Startup finished in 128ms. Jul 15 04:43:20.333215 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:43:20.334690 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:43:20.402605 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:58210.service - OpenSSH per-connection server daemon (10.0.0.1:58210). Jul 15 04:43:20.457762 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 58210 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:43:20.458881 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:43:20.463305 systemd-logind[1506]: New session 2 of user core. 
Jul 15 04:43:20.472202 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 04:43:20.524455 sshd[1667]: Connection closed by 10.0.0.1 port 58210
Jul 15 04:43:20.525082 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:20.539919 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:58210.service: Deactivated successfully.
Jul 15 04:43:20.541829 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 04:43:20.544138 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit.
Jul 15 04:43:20.545536 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:58214.service - OpenSSH per-connection server daemon (10.0.0.1:58214).
Jul 15 04:43:20.546373 systemd-logind[1506]: Removed session 2.
Jul 15 04:43:20.599811 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 58214 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:43:20.600958 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:20.605130 systemd-logind[1506]: New session 3 of user core.
Jul 15 04:43:20.615225 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 04:43:20.662735 sshd[1676]: Connection closed by 10.0.0.1 port 58214
Jul 15 04:43:20.663002 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:20.681965 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:58214.service: Deactivated successfully.
Jul 15 04:43:20.684342 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 04:43:20.687218 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit.
Jul 15 04:43:20.688817 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:58226.service - OpenSSH per-connection server daemon (10.0.0.1:58226).
Jul 15 04:43:20.689880 systemd-logind[1506]: Removed session 3.
Jul 15 04:43:20.752419 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 58226 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:43:20.753617 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:20.758097 systemd-logind[1506]: New session 4 of user core.
Jul 15 04:43:20.771284 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 04:43:20.825104 sshd[1685]: Connection closed by 10.0.0.1 port 58226
Jul 15 04:43:20.823328 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:20.837233 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:58226.service: Deactivated successfully.
Jul 15 04:43:20.840406 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 04:43:20.841108 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit.
Jul 15 04:43:20.843730 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:58230.service - OpenSSH per-connection server daemon (10.0.0.1:58230).
Jul 15 04:43:20.844536 systemd-logind[1506]: Removed session 4.
Jul 15 04:43:20.906405 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 58230 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:43:20.909759 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:20.918765 systemd-logind[1506]: New session 5 of user core.
Jul 15 04:43:20.926367 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 04:43:20.994413 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 04:43:20.994979 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:43:21.009010 sudo[1695]: pam_unix(sudo:session): session closed for user root
Jul 15 04:43:21.011031 sshd[1694]: Connection closed by 10.0.0.1 port 58230
Jul 15 04:43:21.010927 sshd-session[1691]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:21.021100 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:58230.service: Deactivated successfully.
Jul 15 04:43:21.023662 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 04:43:21.024467 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit.
Jul 15 04:43:21.026710 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:58234.service - OpenSSH per-connection server daemon (10.0.0.1:58234).
Jul 15 04:43:21.027220 systemd-logind[1506]: Removed session 5.
Jul 15 04:43:21.106224 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 58234 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:43:21.107561 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:21.111543 systemd-logind[1506]: New session 6 of user core.
Jul 15 04:43:21.123219 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 04:43:21.173948 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 04:43:21.174243 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:43:21.248917 sudo[1706]: pam_unix(sudo:session): session closed for user root
Jul 15 04:43:21.254133 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 04:43:21.254414 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:43:21.263422 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 04:43:21.315866 augenrules[1728]: No rules
Jul 15 04:43:21.317378 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 04:43:21.318164 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 04:43:21.319233 sudo[1705]: pam_unix(sudo:session): session closed for user root
Jul 15 04:43:21.320815 sshd[1704]: Connection closed by 10.0.0.1 port 58234
Jul 15 04:43:21.321230 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:21.330964 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:58234.service: Deactivated successfully.
Jul 15 04:43:21.332524 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 04:43:21.334584 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit.
Jul 15 04:43:21.336763 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:58236.service - OpenSSH per-connection server daemon (10.0.0.1:58236).
Jul 15 04:43:21.337523 systemd-logind[1506]: Removed session 6.
Jul 15 04:43:21.390659 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 58236 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:43:21.391780 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:21.396110 systemd-logind[1506]: New session 7 of user core.
Jul 15 04:43:21.411220 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 04:43:21.462314 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 04:43:21.462599 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:43:21.847674 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 04:43:21.865410 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 04:43:22.184394 dockerd[1762]: time="2025-07-15T04:43:22.184270994Z" level=info msg="Starting up"
Jul 15 04:43:22.186762 dockerd[1762]: time="2025-07-15T04:43:22.186727860Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 04:43:22.198204 dockerd[1762]: time="2025-07-15T04:43:22.198151448Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 15 04:43:22.239579 dockerd[1762]: time="2025-07-15T04:43:22.239532152Z" level=info msg="Loading containers: start."
Jul 15 04:43:22.247064 kernel: Initializing XFRM netlink socket
Jul 15 04:43:22.479173 systemd-networkd[1435]: docker0: Link UP
Jul 15 04:43:22.484429 dockerd[1762]: time="2025-07-15T04:43:22.484382356Z" level=info msg="Loading containers: done."
Jul 15 04:43:22.496917 dockerd[1762]: time="2025-07-15T04:43:22.496865593Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 04:43:22.497059 dockerd[1762]: time="2025-07-15T04:43:22.496943960Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 15 04:43:22.497059 dockerd[1762]: time="2025-07-15T04:43:22.497020467Z" level=info msg="Initializing buildkit"
Jul 15 04:43:22.516865 dockerd[1762]: time="2025-07-15T04:43:22.516822905Z" level=info msg="Completed buildkit initialization"
Jul 15 04:43:22.522114 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 04:43:22.523219 dockerd[1762]: time="2025-07-15T04:43:22.523161552Z" level=info msg="Daemon has completed initialization"
Jul 15 04:43:22.523404 dockerd[1762]: time="2025-07-15T04:43:22.523302273Z" level=info msg="API listen on /run/docker.sock"
Jul 15 04:43:23.193728 containerd[1524]: time="2025-07-15T04:43:23.193642497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 15 04:43:23.980193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134575968.mount: Deactivated successfully.
Jul 15 04:43:24.997737 containerd[1524]: time="2025-07-15T04:43:24.997272040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:25.000608 containerd[1524]: time="2025-07-15T04:43:25.000554906Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 15 04:43:25.002166 containerd[1524]: time="2025-07-15T04:43:25.002128647Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:25.007626 containerd[1524]: time="2025-07-15T04:43:25.007557845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:25.009063 containerd[1524]: time="2025-07-15T04:43:25.008931947Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.815245815s"
Jul 15 04:43:25.009063 containerd[1524]: time="2025-07-15T04:43:25.008980379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 15 04:43:25.010180 containerd[1524]: time="2025-07-15T04:43:25.010139277Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 15 04:43:26.259145 containerd[1524]: time="2025-07-15T04:43:26.259094684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:26.260258 containerd[1524]: time="2025-07-15T04:43:26.260222780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 15 04:43:26.261547 containerd[1524]: time="2025-07-15T04:43:26.261481871Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:26.264130 containerd[1524]: time="2025-07-15T04:43:26.264070580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:26.267180 containerd[1524]: time="2025-07-15T04:43:26.265188693Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.25501542s"
Jul 15 04:43:26.267180 containerd[1524]: time="2025-07-15T04:43:26.265466581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 15 04:43:26.268371 containerd[1524]: time="2025-07-15T04:43:26.268340384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 15 04:43:26.313889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 04:43:26.315326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:43:26.479119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:43:26.484266 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:43:26.526848 kubelet[2044]: E0715 04:43:26.526698 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:43:26.529759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:43:26.529903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:43:26.530497 systemd[1]: kubelet.service: Consumed 161ms CPU time, 108.3M memory peak.
Jul 15 04:43:27.608444 containerd[1524]: time="2025-07-15T04:43:27.608393800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:27.609399 containerd[1524]: time="2025-07-15T04:43:27.609357328Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 15 04:43:27.610106 containerd[1524]: time="2025-07-15T04:43:27.610022513Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:27.613635 containerd[1524]: time="2025-07-15T04:43:27.613596947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:27.614731 containerd[1524]: time="2025-07-15T04:43:27.614674993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.346295688s"
Jul 15 04:43:27.614863 containerd[1524]: time="2025-07-15T04:43:27.614818392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 15 04:43:27.615418 containerd[1524]: time="2025-07-15T04:43:27.615360854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 15 04:43:28.513121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180712663.mount: Deactivated successfully.
Jul 15 04:43:28.756071 containerd[1524]: time="2025-07-15T04:43:28.755570901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:28.756549 containerd[1524]: time="2025-07-15T04:43:28.756146951Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 15 04:43:28.756727 containerd[1524]: time="2025-07-15T04:43:28.756693457Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:28.758915 containerd[1524]: time="2025-07-15T04:43:28.758873773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:28.760245 containerd[1524]: time="2025-07-15T04:43:28.760204907Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.144695593s"
Jul 15 04:43:28.760289 containerd[1524]: time="2025-07-15T04:43:28.760257322Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 15 04:43:28.760713 containerd[1524]: time="2025-07-15T04:43:28.760689350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 04:43:29.383989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930102445.mount: Deactivated successfully.
Jul 15 04:43:30.224057 containerd[1524]: time="2025-07-15T04:43:30.223701865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:30.224057 containerd[1524]: time="2025-07-15T04:43:30.224044185Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 15 04:43:30.225971 containerd[1524]: time="2025-07-15T04:43:30.225398007Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:30.228800 containerd[1524]: time="2025-07-15T04:43:30.228717459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:30.229801 containerd[1524]: time="2025-07-15T04:43:30.229757909Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.469038021s"
Jul 15 04:43:30.229801 containerd[1524]: time="2025-07-15T04:43:30.229800308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 04:43:30.230521 containerd[1524]: time="2025-07-15T04:43:30.230458770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 04:43:30.650009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78342846.mount: Deactivated successfully.
Jul 15 04:43:30.654587 containerd[1524]: time="2025-07-15T04:43:30.654530174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:43:30.654931 containerd[1524]: time="2025-07-15T04:43:30.654903610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 15 04:43:30.655845 containerd[1524]: time="2025-07-15T04:43:30.655811324Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:43:30.657507 containerd[1524]: time="2025-07-15T04:43:30.657477916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:43:30.658694 containerd[1524]: time="2025-07-15T04:43:30.658667765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 428.178239ms"
Jul 15 04:43:30.658743 containerd[1524]: time="2025-07-15T04:43:30.658704221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 04:43:30.659121 containerd[1524]: time="2025-07-15T04:43:30.659101105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 15 04:43:31.195496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987446248.mount: Deactivated successfully.
Jul 15 04:43:32.804383 containerd[1524]: time="2025-07-15T04:43:32.804335209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:32.805432 containerd[1524]: time="2025-07-15T04:43:32.805157165Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 15 04:43:32.806145 containerd[1524]: time="2025-07-15T04:43:32.806114427Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:32.808845 containerd[1524]: time="2025-07-15T04:43:32.808803572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:43:32.810249 containerd[1524]: time="2025-07-15T04:43:32.810206833Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.150992151s"
Jul 15 04:43:32.810249 containerd[1524]: time="2025-07-15T04:43:32.810245584Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 15 04:43:36.563883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 04:43:36.568286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:43:36.730633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:43:36.743315 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:43:36.779630 kubelet[2206]: E0715 04:43:36.779568 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:43:36.782306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:43:36.782443 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:43:36.782825 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107M memory peak.
Jul 15 04:43:38.743840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:43:38.744098 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107M memory peak.
Jul 15 04:43:38.747406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:43:38.767484 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-7.scope)...
Jul 15 04:43:38.767506 systemd[1]: Reloading...
Jul 15 04:43:38.846083 zram_generator::config[2264]: No configuration found.
Jul 15 04:43:39.071500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:43:39.157785 systemd[1]: Reloading finished in 389 ms.
Jul 15 04:43:39.213705 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 04:43:39.213791 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 04:43:39.214062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:43:39.214116 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95M memory peak.
Jul 15 04:43:39.216443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:43:39.337511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:43:39.342011 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 04:43:39.379040 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:43:39.379040 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 04:43:39.379040 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:43:39.379415 kubelet[2309]: I0715 04:43:39.379101 2309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 04:43:40.264985 kubelet[2309]: I0715 04:43:40.264928 2309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 15 04:43:40.264985 kubelet[2309]: I0715 04:43:40.264969 2309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 04:43:40.265298 kubelet[2309]: I0715 04:43:40.265265 2309 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 15 04:43:40.296050 kubelet[2309]: E0715 04:43:40.295630 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:43:40.296835 kubelet[2309]: I0715 04:43:40.296794 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 04:43:40.303519 kubelet[2309]: I0715 04:43:40.303479 2309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 04:43:40.306388 kubelet[2309]: I0715 04:43:40.306353 2309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 04:43:40.307116 kubelet[2309]: I0715 04:43:40.307051 2309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 04:43:40.307297 kubelet[2309]: I0715 04:43:40.307103 2309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 04:43:40.307398 kubelet[2309]: I0715 04:43:40.307374 2309 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 04:43:40.307398 kubelet[2309]: I0715 04:43:40.307385 2309 container_manager_linux.go:304] "Creating device plugin manager"
Jul 15 04:43:40.307617 kubelet[2309]: I0715 04:43:40.307589 2309 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:43:40.310158 kubelet[2309]: I0715 04:43:40.310122 2309 kubelet.go:446] "Attempting to sync node with API server"
Jul 15 04:43:40.310158 kubelet[2309]: I0715 04:43:40.310154 2309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 04:43:40.310215 kubelet[2309]: I0715 04:43:40.310180 2309 kubelet.go:352] "Adding apiserver pod source"
Jul 15 04:43:40.310215 kubelet[2309]: I0715 04:43:40.310191 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 04:43:40.311708 kubelet[2309]: W0715 04:43:40.311645 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Jul 15 04:43:40.311816 kubelet[2309]: E0715 04:43:40.311710 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:43:40.312363 kubelet[2309]: W0715 04:43:40.312319 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Jul 15 04:43:40.312417 kubelet[2309]: E0715 04:43:40.312368 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:43:40.314443 kubelet[2309]: I0715 04:43:40.314417 2309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 15 04:43:40.315095 kubelet[2309]: I0715 04:43:40.315072 2309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 04:43:40.315221 kubelet[2309]: W0715 04:43:40.315204 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 04:43:40.316134 kubelet[2309]: I0715 04:43:40.316110 2309 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 04:43:40.316198 kubelet[2309]: I0715 04:43:40.316149 2309 server.go:1287] "Started kubelet"
Jul 15 04:43:40.318149 kubelet[2309]: I0715 04:43:40.317662 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 04:43:40.318149 kubelet[2309]: I0715 04:43:40.318074 2309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 04:43:40.318216 kubelet[2309]: I0715 04:43:40.318144 2309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 04:43:40.318587 kubelet[2309]: I0715 04:43:40.318549 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 04:43:40.319365 kubelet[2309]: I0715 04:43:40.319336 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 04:43:40.322960 kubelet[2309]: E0715 04:43:40.322676 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 04:43:40.322960 kubelet[2309]: I0715 04:43:40.322735 2309 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 04:43:40.323077 kubelet[2309]: I0715 04:43:40.322967 2309 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 04:43:40.323077 kubelet[2309]: I0715 04:43:40.323058 2309 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 04:43:40.323523 kubelet[2309]: W0715 04:43:40.323481 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Jul 15 04:43:40.323861 kubelet[2309]: I0715 04:43:40.323823 2309 server.go:479] "Adding debug handlers to kubelet server"
Jul 15 04:43:40.323995 kubelet[2309]: E0715 04:43:40.323817 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:43:40.323995 kubelet[2309]: E0715 04:43:40.323759 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms"
Jul 15 04:43:40.328382 kubelet[2309]: I0715 04:43:40.328357 2309 factory.go:221] Registration of the systemd container factory successfully
Jul 15 04:43:40.328477 kubelet[2309]: I0715 04:43:40.328455 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 04:43:40.330135 kubelet[2309]: E0715 04:43:40.330104 2309 kubelet.go:1555] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:43:40.330135 kubelet[2309]: I0715 04:43:40.330126 2309 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:43:40.330316 kubelet[2309]: E0715 04:43:40.329977 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852532449699c4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 04:43:40.316130378 +0000 UTC m=+0.970965259,LastTimestamp:2025-07-15 04:43:40.316130378 +0000 UTC m=+0.970965259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 04:43:40.340287 kubelet[2309]: I0715 04:43:40.340245 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:43:40.341589 kubelet[2309]: I0715 04:43:40.341563 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 04:43:40.342134 kubelet[2309]: I0715 04:43:40.341764 2309 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 04:43:40.342134 kubelet[2309]: I0715 04:43:40.341807 2309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 04:43:40.342134 kubelet[2309]: I0715 04:43:40.341816 2309 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 04:43:40.342134 kubelet[2309]: E0715 04:43:40.341857 2309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:43:40.344857 kubelet[2309]: W0715 04:43:40.344820 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jul 15 04:43:40.344927 kubelet[2309]: E0715 04:43:40.344862 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:40.345553 kubelet[2309]: I0715 04:43:40.345520 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:43:40.345553 kubelet[2309]: I0715 04:43:40.345537 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:43:40.345553 kubelet[2309]: I0715 04:43:40.345557 2309 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:43:40.418712 kubelet[2309]: I0715 04:43:40.418662 2309 policy_none.go:49] "None policy: Start" Jul 15 04:43:40.418712 kubelet[2309]: I0715 04:43:40.418705 2309 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:43:40.418712 kubelet[2309]: I0715 04:43:40.418719 2309 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:43:40.423780 kubelet[2309]: E0715 04:43:40.423667 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:40.424310 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 15 04:43:40.438335 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:43:40.441648 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 04:43:40.442232 kubelet[2309]: E0715 04:43:40.442085 2309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:43:40.460921 kubelet[2309]: I0715 04:43:40.460877 2309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:43:40.461210 kubelet[2309]: I0715 04:43:40.461116 2309 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:43:40.461210 kubelet[2309]: I0715 04:43:40.461133 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:43:40.461351 kubelet[2309]: I0715 04:43:40.461313 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:43:40.462152 kubelet[2309]: E0715 04:43:40.462124 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 04:43:40.462285 kubelet[2309]: E0715 04:43:40.462223 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 04:43:40.525053 kubelet[2309]: E0715 04:43:40.524917 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Jul 15 04:43:40.563187 kubelet[2309]: I0715 04:43:40.563146 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:43:40.563683 kubelet[2309]: E0715 04:43:40.563641 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jul 15 04:43:40.651229 systemd[1]: Created slice kubepods-burstable-podb1879a1ecfad9740c346637a56d2cb49.slice - libcontainer container kubepods-burstable-podb1879a1ecfad9740c346637a56d2cb49.slice. Jul 15 04:43:40.677495 kubelet[2309]: E0715 04:43:40.677462 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:40.680764 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 15 04:43:40.683065 kubelet[2309]: E0715 04:43:40.682699 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:40.685242 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 15 04:43:40.686713 kubelet[2309]: E0715 04:43:40.686689 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:40.726156 kubelet[2309]: I0715 04:43:40.726065 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:40.726156 kubelet[2309]: I0715 04:43:40.726119 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:43:40.726388 kubelet[2309]: I0715 04:43:40.726165 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1879a1ecfad9740c346637a56d2cb49-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1879a1ecfad9740c346637a56d2cb49\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:40.726388 kubelet[2309]: I0715 04:43:40.726189 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:40.726388 kubelet[2309]: I0715 04:43:40.726210 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:40.726388 kubelet[2309]: I0715 04:43:40.726238 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:40.726388 kubelet[2309]: I0715 04:43:40.726253 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1879a1ecfad9740c346637a56d2cb49-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1879a1ecfad9740c346637a56d2cb49\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:40.726495 kubelet[2309]: I0715 04:43:40.726270 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1879a1ecfad9740c346637a56d2cb49-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1879a1ecfad9740c346637a56d2cb49\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:40.726495 kubelet[2309]: I0715 04:43:40.726285 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:40.765271 kubelet[2309]: I0715 04:43:40.765245 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 
04:43:40.765680 kubelet[2309]: E0715 04:43:40.765641 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jul 15 04:43:40.926368 kubelet[2309]: E0715 04:43:40.926313 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Jul 15 04:43:40.979851 containerd[1524]: time="2025-07-15T04:43:40.979595609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1879a1ecfad9740c346637a56d2cb49,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:40.984397 containerd[1524]: time="2025-07-15T04:43:40.984360749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:40.988010 containerd[1524]: time="2025-07-15T04:43:40.987970550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:41.001550 containerd[1524]: time="2025-07-15T04:43:41.001504602Z" level=info msg="connecting to shim 88c43f43b64978a0395865e72ab5fb9dcedcac32d46b3e0da552a8daee6c449d" address="unix:///run/containerd/s/7f6c36d1d930bee0bbefe4ab286f757688dd20951419173eed2e3bf3a5709e0c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:41.018105 containerd[1524]: time="2025-07-15T04:43:41.018056489Z" level=info msg="connecting to shim f13e3d21b0c97b8fe21fdb1f6f7591a45dce5a57d26ca797d56f8339f54bcf71" address="unix:///run/containerd/s/a7109aa87a0ff91bc3d1bac432689f559f25479569aceaa39bd36712b33cbde4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:41.026051 containerd[1524]: 
time="2025-07-15T04:43:41.025970200Z" level=info msg="connecting to shim 8536b42bf2377d28de510edd6f55ffed185ab5f71fff64d2969ecd248fca20c3" address="unix:///run/containerd/s/f2995913202225432d3cebcd04f43a4bc985ed36af5f48a1b641fd23ea7daa0d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:41.035201 systemd[1]: Started cri-containerd-88c43f43b64978a0395865e72ab5fb9dcedcac32d46b3e0da552a8daee6c449d.scope - libcontainer container 88c43f43b64978a0395865e72ab5fb9dcedcac32d46b3e0da552a8daee6c449d. Jul 15 04:43:41.055195 systemd[1]: Started cri-containerd-8536b42bf2377d28de510edd6f55ffed185ab5f71fff64d2969ecd248fca20c3.scope - libcontainer container 8536b42bf2377d28de510edd6f55ffed185ab5f71fff64d2969ecd248fca20c3. Jul 15 04:43:41.056540 systemd[1]: Started cri-containerd-f13e3d21b0c97b8fe21fdb1f6f7591a45dce5a57d26ca797d56f8339f54bcf71.scope - libcontainer container f13e3d21b0c97b8fe21fdb1f6f7591a45dce5a57d26ca797d56f8339f54bcf71. Jul 15 04:43:41.112046 containerd[1524]: time="2025-07-15T04:43:41.111985808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1879a1ecfad9740c346637a56d2cb49,Namespace:kube-system,Attempt:0,} returns sandbox id \"88c43f43b64978a0395865e72ab5fb9dcedcac32d46b3e0da552a8daee6c449d\"" Jul 15 04:43:41.114636 containerd[1524]: time="2025-07-15T04:43:41.114607951Z" level=info msg="CreateContainer within sandbox \"88c43f43b64978a0395865e72ab5fb9dcedcac32d46b3e0da552a8daee6c449d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:43:41.124126 kubelet[2309]: W0715 04:43:41.124063 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jul 15 04:43:41.124201 kubelet[2309]: E0715 04:43:41.124129 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:41.125644 containerd[1524]: time="2025-07-15T04:43:41.125606525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8536b42bf2377d28de510edd6f55ffed185ab5f71fff64d2969ecd248fca20c3\"" Jul 15 04:43:41.127223 containerd[1524]: time="2025-07-15T04:43:41.127189932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f13e3d21b0c97b8fe21fdb1f6f7591a45dce5a57d26ca797d56f8339f54bcf71\"" Jul 15 04:43:41.128798 containerd[1524]: time="2025-07-15T04:43:41.128760328Z" level=info msg="CreateContainer within sandbox \"8536b42bf2377d28de510edd6f55ffed185ab5f71fff64d2969ecd248fca20c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:43:41.129291 containerd[1524]: time="2025-07-15T04:43:41.129268806Z" level=info msg="CreateContainer within sandbox \"f13e3d21b0c97b8fe21fdb1f6f7591a45dce5a57d26ca797d56f8339f54bcf71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:43:41.135958 containerd[1524]: time="2025-07-15T04:43:41.135926233Z" level=info msg="Container 9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:41.137740 containerd[1524]: time="2025-07-15T04:43:41.137707250Z" level=info msg="Container b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:41.139556 containerd[1524]: time="2025-07-15T04:43:41.139510087Z" level=info msg="Container 
22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:41.144857 containerd[1524]: time="2025-07-15T04:43:41.144804777Z" level=info msg="CreateContainer within sandbox \"88c43f43b64978a0395865e72ab5fb9dcedcac32d46b3e0da552a8daee6c449d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4\"" Jul 15 04:43:41.146262 containerd[1524]: time="2025-07-15T04:43:41.146234131Z" level=info msg="StartContainer for \"9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4\"" Jul 15 04:43:41.147078 containerd[1524]: time="2025-07-15T04:43:41.147011882Z" level=info msg="CreateContainer within sandbox \"f13e3d21b0c97b8fe21fdb1f6f7591a45dce5a57d26ca797d56f8339f54bcf71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1\"" Jul 15 04:43:41.147403 containerd[1524]: time="2025-07-15T04:43:41.147381201Z" level=info msg="StartContainer for \"b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1\"" Jul 15 04:43:41.147587 containerd[1524]: time="2025-07-15T04:43:41.147553269Z" level=info msg="connecting to shim 9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4" address="unix:///run/containerd/s/7f6c36d1d930bee0bbefe4ab286f757688dd20951419173eed2e3bf3a5709e0c" protocol=ttrpc version=3 Jul 15 04:43:41.148485 containerd[1524]: time="2025-07-15T04:43:41.148452005Z" level=info msg="connecting to shim b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1" address="unix:///run/containerd/s/a7109aa87a0ff91bc3d1bac432689f559f25479569aceaa39bd36712b33cbde4" protocol=ttrpc version=3 Jul 15 04:43:41.149580 containerd[1524]: time="2025-07-15T04:43:41.149525892Z" level=info msg="CreateContainer within sandbox \"8536b42bf2377d28de510edd6f55ffed185ab5f71fff64d2969ecd248fca20c3\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530\"" Jul 15 04:43:41.150044 containerd[1524]: time="2025-07-15T04:43:41.149997900Z" level=info msg="StartContainer for \"22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530\"" Jul 15 04:43:41.150977 containerd[1524]: time="2025-07-15T04:43:41.150916413Z" level=info msg="connecting to shim 22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530" address="unix:///run/containerd/s/f2995913202225432d3cebcd04f43a4bc985ed36af5f48a1b641fd23ea7daa0d" protocol=ttrpc version=3 Jul 15 04:43:41.167528 kubelet[2309]: I0715 04:43:41.167502 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:43:41.167859 kubelet[2309]: E0715 04:43:41.167824 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Jul 15 04:43:41.185243 systemd[1]: Started cri-containerd-9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4.scope - libcontainer container 9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4. Jul 15 04:43:41.186764 systemd[1]: Started cri-containerd-b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1.scope - libcontainer container b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1. Jul 15 04:43:41.190528 systemd[1]: Started cri-containerd-22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530.scope - libcontainer container 22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530. 
Jul 15 04:43:41.236291 containerd[1524]: time="2025-07-15T04:43:41.236239823Z" level=info msg="StartContainer for \"b80373f5279e5f6dc6acab24e0d669d04a175a6fed36bb80c182b41b985747e1\" returns successfully" Jul 15 04:43:41.244857 containerd[1524]: time="2025-07-15T04:43:41.244815065Z" level=info msg="StartContainer for \"9a19a92075f570b3d28be3946ce75a8e2e04a7dc9729fe971f2965a6806546e4\" returns successfully" Jul 15 04:43:41.267077 containerd[1524]: time="2025-07-15T04:43:41.265761586Z" level=info msg="StartContainer for \"22206619fe21612904786784de155565f751072a1b62786b6e6bfc478a764530\" returns successfully" Jul 15 04:43:41.274513 kubelet[2309]: W0715 04:43:41.274456 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Jul 15 04:43:41.274603 kubelet[2309]: E0715 04:43:41.274522 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:43:41.361055 kubelet[2309]: E0715 04:43:41.360966 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:41.362834 kubelet[2309]: E0715 04:43:41.362809 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:41.366518 kubelet[2309]: E0715 04:43:41.366445 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 
04:43:41.972100 kubelet[2309]: I0715 04:43:41.969650 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:43:42.370229 kubelet[2309]: E0715 04:43:42.369728 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:42.370229 kubelet[2309]: E0715 04:43:42.369979 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:42.533803 kubelet[2309]: E0715 04:43:42.533735 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:43:42.878609 kubelet[2309]: E0715 04:43:42.878554 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 04:43:43.052868 kubelet[2309]: I0715 04:43:43.052813 2309 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 04:43:43.052868 kubelet[2309]: E0715 04:43:43.052857 2309 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 04:43:43.065625 kubelet[2309]: E0715 04:43:43.065580 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.166743 kubelet[2309]: E0715 04:43:43.166384 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.266521 kubelet[2309]: E0715 04:43:43.266480 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.366995 kubelet[2309]: E0715 04:43:43.366940 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 15 04:43:43.468003 kubelet[2309]: E0715 04:43:43.467888 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.568483 kubelet[2309]: E0715 04:43:43.568444 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.669525 kubelet[2309]: E0715 04:43:43.669477 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.770244 kubelet[2309]: E0715 04:43:43.770089 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.870473 kubelet[2309]: E0715 04:43:43.870437 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:43.971197 kubelet[2309]: E0715 04:43:43.971129 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:44.071760 kubelet[2309]: E0715 04:43:44.071716 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:44.223415 kubelet[2309]: I0715 04:43:44.223357 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:44.250973 kubelet[2309]: I0715 04:43:44.250923 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:44.314343 kubelet[2309]: I0715 04:43:44.314299 2309 apiserver.go:52] "Watching apiserver" Jul 15 04:43:44.323323 kubelet[2309]: I0715 04:43:44.323224 2309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:43:44.341519 kubelet[2309]: I0715 04:43:44.341463 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" 
Jul 15 04:43:44.808450 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-7.scope)... Jul 15 04:43:44.808468 systemd[1]: Reloading... Jul 15 04:43:44.887075 zram_generator::config[2628]: No configuration found. Jul 15 04:43:44.970018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:43:45.066795 systemd[1]: Reloading finished in 257 ms. Jul 15 04:43:45.100891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:43:45.117148 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:43:45.117433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:43:45.117493 systemd[1]: kubelet.service: Consumed 1.370s CPU time, 128M memory peak. Jul 15 04:43:45.119231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:43:45.236931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:43:45.253181 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:43:45.287373 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:43:45.287373 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:43:45.287373 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 04:43:45.287679 kubelet[2670]: I0715 04:43:45.287431 2670 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:43:45.294613 kubelet[2670]: I0715 04:43:45.294584 2670 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 04:43:45.294696 kubelet[2670]: I0715 04:43:45.294633 2670 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:43:45.294869 kubelet[2670]: I0715 04:43:45.294852 2670 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 04:43:45.296517 kubelet[2670]: I0715 04:43:45.296484 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 04:43:45.300544 kubelet[2670]: I0715 04:43:45.300516 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:43:45.307150 kubelet[2670]: I0715 04:43:45.307125 2670 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:43:45.309863 kubelet[2670]: I0715 04:43:45.309836 2670 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:43:45.310133 kubelet[2670]: I0715 04:43:45.310089 2670 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:43:45.310278 kubelet[2670]: I0715 04:43:45.310116 2670 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:43:45.310278 kubelet[2670]: I0715 04:43:45.310279 2670 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 15 04:43:45.310442 kubelet[2670]: I0715 04:43:45.310288 2670 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 04:43:45.310442 kubelet[2670]: I0715 04:43:45.310329 2670 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:43:45.310518 kubelet[2670]: I0715 04:43:45.310465 2670 kubelet.go:446] "Attempting to sync node with API server" Jul 15 04:43:45.310518 kubelet[2670]: I0715 04:43:45.310477 2670 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:43:45.310518 kubelet[2670]: I0715 04:43:45.310496 2670 kubelet.go:352] "Adding apiserver pod source" Jul 15 04:43:45.310518 kubelet[2670]: I0715 04:43:45.310505 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:43:45.311279 kubelet[2670]: I0715 04:43:45.311126 2670 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:43:45.315281 kubelet[2670]: I0715 04:43:45.315221 2670 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:43:45.316204 kubelet[2670]: I0715 04:43:45.316160 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:43:45.316299 kubelet[2670]: I0715 04:43:45.316290 2670 server.go:1287] "Started kubelet" Jul 15 04:43:45.322468 kubelet[2670]: I0715 04:43:45.319669 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:43:45.322468 kubelet[2670]: I0715 04:43:45.322223 2670 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:43:45.322468 kubelet[2670]: I0715 04:43:45.320819 2670 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:43:45.322867 kubelet[2670]: E0715 04:43:45.322584 2670 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:43:45.322867 kubelet[2670]: I0715 04:43:45.322830 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:43:45.323222 kubelet[2670]: E0715 04:43:45.323068 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:43:45.323265 kubelet[2670]: I0715 04:43:45.323253 2670 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:43:45.323265 kubelet[2670]: I0715 04:43:45.320853 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:43:45.323490 kubelet[2670]: I0715 04:43:45.323401 2670 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:43:45.323719 kubelet[2670]: I0715 04:43:45.323696 2670 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:43:45.323800 kubelet[2670]: I0715 04:43:45.323777 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:43:45.323947 kubelet[2670]: I0715 04:43:45.321530 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:43:45.324009 kubelet[2670]: I0715 04:43:45.323986 2670 server.go:479] "Adding debug handlers to kubelet server" Jul 15 04:43:45.331215 kubelet[2670]: I0715 04:43:45.331183 2670 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:43:45.340281 kubelet[2670]: I0715 04:43:45.340242 2670 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 15 04:43:45.341614 kubelet[2670]: I0715 04:43:45.341583 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 04:43:45.341614 kubelet[2670]: I0715 04:43:45.341607 2670 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 04:43:45.341703 kubelet[2670]: I0715 04:43:45.341623 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 04:43:45.341703 kubelet[2670]: I0715 04:43:45.341630 2670 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 04:43:45.341703 kubelet[2670]: E0715 04:43:45.341665 2670 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:43:45.366762 kubelet[2670]: I0715 04:43:45.366735 2670 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:43:45.366762 kubelet[2670]: I0715 04:43:45.366756 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:43:45.366880 kubelet[2670]: I0715 04:43:45.366777 2670 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:43:45.366932 kubelet[2670]: I0715 04:43:45.366914 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:43:45.366956 kubelet[2670]: I0715 04:43:45.366931 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:43:45.366956 kubelet[2670]: I0715 04:43:45.366949 2670 policy_none.go:49] "None policy: Start" Jul 15 04:43:45.366995 kubelet[2670]: I0715 04:43:45.366957 2670 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:43:45.366995 kubelet[2670]: I0715 04:43:45.366966 2670 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:43:45.367096 kubelet[2670]: I0715 04:43:45.367083 2670 state_mem.go:75] "Updated machine memory state" Jul 15 04:43:45.370504 kubelet[2670]: I0715 04:43:45.370470 2670 manager.go:519] "Failed to read 
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:43:45.370639 kubelet[2670]: I0715 04:43:45.370616 2670 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:43:45.370677 kubelet[2670]: I0715 04:43:45.370635 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:43:45.370828 kubelet[2670]: I0715 04:43:45.370808 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:43:45.371852 kubelet[2670]: E0715 04:43:45.371794 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 04:43:45.442673 kubelet[2670]: I0715 04:43:45.442617 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:45.443074 kubelet[2670]: I0715 04:43:45.443054 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 04:43:45.445158 kubelet[2670]: I0715 04:43:45.443058 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:45.447674 kubelet[2670]: E0715 04:43:45.447589 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:45.448824 kubelet[2670]: E0715 04:43:45.448799 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 04:43:45.449494 kubelet[2670]: E0715 04:43:45.449467 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:45.472903 kubelet[2670]: I0715 04:43:45.472875 2670 kubelet_node_status.go:75] "Attempting to 
register node" node="localhost" Jul 15 04:43:45.478633 kubelet[2670]: I0715 04:43:45.478608 2670 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 04:43:45.478705 kubelet[2670]: I0715 04:43:45.478681 2670 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 04:43:45.524590 kubelet[2670]: I0715 04:43:45.524494 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1879a1ecfad9740c346637a56d2cb49-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1879a1ecfad9740c346637a56d2cb49\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:45.524590 kubelet[2670]: I0715 04:43:45.524544 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:45.524590 kubelet[2670]: I0715 04:43:45.524570 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:45.524590 kubelet[2670]: I0715 04:43:45.524588 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:43:45.524590 kubelet[2670]: I0715 04:43:45.524604 2670 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1879a1ecfad9740c346637a56d2cb49-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1879a1ecfad9740c346637a56d2cb49\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:45.524854 kubelet[2670]: I0715 04:43:45.524620 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1879a1ecfad9740c346637a56d2cb49-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1879a1ecfad9740c346637a56d2cb49\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:45.524854 kubelet[2670]: I0715 04:43:45.524638 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:45.524854 kubelet[2670]: I0715 04:43:45.524652 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:45.524854 kubelet[2670]: I0715 04:43:45.524669 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:43:46.311453 kubelet[2670]: I0715 04:43:46.311428 2670 
apiserver.go:52] "Watching apiserver" Jul 15 04:43:46.323248 kubelet[2670]: I0715 04:43:46.323210 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:43:46.352065 kubelet[2670]: I0715 04:43:46.350704 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.350684731 podStartE2EDuration="2.350684731s" podCreationTimestamp="2025-07-15 04:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:46.340576569 +0000 UTC m=+1.084467449" watchObservedRunningTime="2025-07-15 04:43:46.350684731 +0000 UTC m=+1.094575611" Jul 15 04:43:46.352065 kubelet[2670]: I0715 04:43:46.350840 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.350834646 podStartE2EDuration="2.350834646s" podCreationTimestamp="2025-07-15 04:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:46.350813556 +0000 UTC m=+1.094704476" watchObservedRunningTime="2025-07-15 04:43:46.350834646 +0000 UTC m=+1.094725486" Jul 15 04:43:46.360059 kubelet[2670]: I0715 04:43:46.359060 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:46.360457 kubelet[2670]: I0715 04:43:46.359129 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 04:43:46.367186 kubelet[2670]: E0715 04:43:46.367156 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 04:43:46.368828 kubelet[2670]: E0715 04:43:46.368681 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 04:43:46.372923 kubelet[2670]: I0715 04:43:46.372877 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.372863883 podStartE2EDuration="2.372863883s" podCreationTimestamp="2025-07-15 04:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:46.365236528 +0000 UTC m=+1.109127408" watchObservedRunningTime="2025-07-15 04:43:46.372863883 +0000 UTC m=+1.116754763" Jul 15 04:43:50.288898 kubelet[2670]: I0715 04:43:50.288860 2670 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:43:50.289816 containerd[1524]: time="2025-07-15T04:43:50.289723526Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 04:43:50.290265 kubelet[2670]: I0715 04:43:50.290241 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:43:51.070206 systemd[1]: Created slice kubepods-besteffort-pod0200d4cb_31a7_44fd_8bb1_90a5031c6e24.slice - libcontainer container kubepods-besteffort-pod0200d4cb_31a7_44fd_8bb1_90a5031c6e24.slice. 
Jul 15 04:43:51.162124 kubelet[2670]: I0715 04:43:51.162077 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4w8\" (UniqueName: \"kubernetes.io/projected/0200d4cb-31a7-44fd-8bb1-90a5031c6e24-kube-api-access-bl4w8\") pod \"kube-proxy-7srr7\" (UID: \"0200d4cb-31a7-44fd-8bb1-90a5031c6e24\") " pod="kube-system/kube-proxy-7srr7" Jul 15 04:43:51.162236 kubelet[2670]: I0715 04:43:51.162139 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0200d4cb-31a7-44fd-8bb1-90a5031c6e24-kube-proxy\") pod \"kube-proxy-7srr7\" (UID: \"0200d4cb-31a7-44fd-8bb1-90a5031c6e24\") " pod="kube-system/kube-proxy-7srr7" Jul 15 04:43:51.162236 kubelet[2670]: I0715 04:43:51.162171 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0200d4cb-31a7-44fd-8bb1-90a5031c6e24-xtables-lock\") pod \"kube-proxy-7srr7\" (UID: \"0200d4cb-31a7-44fd-8bb1-90a5031c6e24\") " pod="kube-system/kube-proxy-7srr7" Jul 15 04:43:51.162236 kubelet[2670]: I0715 04:43:51.162187 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0200d4cb-31a7-44fd-8bb1-90a5031c6e24-lib-modules\") pod \"kube-proxy-7srr7\" (UID: \"0200d4cb-31a7-44fd-8bb1-90a5031c6e24\") " pod="kube-system/kube-proxy-7srr7" Jul 15 04:43:51.386409 containerd[1524]: time="2025-07-15T04:43:51.386284565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7srr7,Uid:0200d4cb-31a7-44fd-8bb1-90a5031c6e24,Namespace:kube-system,Attempt:0,}" Jul 15 04:43:51.420365 systemd[1]: Created slice kubepods-besteffort-pode74252f5_ca48_4486_b533_c1ba2526f428.slice - libcontainer container kubepods-besteffort-pode74252f5_ca48_4486_b533_c1ba2526f428.slice. 
Jul 15 04:43:51.421025 containerd[1524]: time="2025-07-15T04:43:51.420510526Z" level=info msg="connecting to shim f0d7b6cc207e7729fb8b14a986c85f0968213bb8b2077ba6fbeff4ac315bd430" address="unix:///run/containerd/s/d11b93086aa4f41f13561c1683a6f0894cbc644ba8874e56098b7163a13eb1a1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:51.461255 systemd[1]: Started cri-containerd-f0d7b6cc207e7729fb8b14a986c85f0968213bb8b2077ba6fbeff4ac315bd430.scope - libcontainer container f0d7b6cc207e7729fb8b14a986c85f0968213bb8b2077ba6fbeff4ac315bd430. Jul 15 04:43:51.464367 kubelet[2670]: I0715 04:43:51.464327 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e74252f5-ca48-4486-b533-c1ba2526f428-var-lib-calico\") pod \"tigera-operator-747864d56d-7cjkd\" (UID: \"e74252f5-ca48-4486-b533-c1ba2526f428\") " pod="tigera-operator/tigera-operator-747864d56d-7cjkd" Jul 15 04:43:51.464367 kubelet[2670]: I0715 04:43:51.464368 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtb9d\" (UniqueName: \"kubernetes.io/projected/e74252f5-ca48-4486-b533-c1ba2526f428-kube-api-access-mtb9d\") pod \"tigera-operator-747864d56d-7cjkd\" (UID: \"e74252f5-ca48-4486-b533-c1ba2526f428\") " pod="tigera-operator/tigera-operator-747864d56d-7cjkd" Jul 15 04:43:51.482196 containerd[1524]: time="2025-07-15T04:43:51.482159602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7srr7,Uid:0200d4cb-31a7-44fd-8bb1-90a5031c6e24,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d7b6cc207e7729fb8b14a986c85f0968213bb8b2077ba6fbeff4ac315bd430\"" Jul 15 04:43:51.484950 containerd[1524]: time="2025-07-15T04:43:51.484567278Z" level=info msg="CreateContainer within sandbox \"f0d7b6cc207e7729fb8b14a986c85f0968213bb8b2077ba6fbeff4ac315bd430\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:43:51.505303 
containerd[1524]: time="2025-07-15T04:43:51.505267782Z" level=info msg="Container c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:51.509181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372036661.mount: Deactivated successfully. Jul 15 04:43:51.523149 containerd[1524]: time="2025-07-15T04:43:51.523093489Z" level=info msg="CreateContainer within sandbox \"f0d7b6cc207e7729fb8b14a986c85f0968213bb8b2077ba6fbeff4ac315bd430\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d\"" Jul 15 04:43:51.523851 containerd[1524]: time="2025-07-15T04:43:51.523811078Z" level=info msg="StartContainer for \"c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d\"" Jul 15 04:43:51.525913 containerd[1524]: time="2025-07-15T04:43:51.525882368Z" level=info msg="connecting to shim c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d" address="unix:///run/containerd/s/d11b93086aa4f41f13561c1683a6f0894cbc644ba8874e56098b7163a13eb1a1" protocol=ttrpc version=3 Jul 15 04:43:51.549184 systemd[1]: Started cri-containerd-c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d.scope - libcontainer container c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d. 
Jul 15 04:43:51.581690 containerd[1524]: time="2025-07-15T04:43:51.581651196Z" level=info msg="StartContainer for \"c2eee81b54e587daff6350196d293ee44fb99344821293d804bbcb641da5822d\" returns successfully" Jul 15 04:43:51.725831 containerd[1524]: time="2025-07-15T04:43:51.725728806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7cjkd,Uid:e74252f5-ca48-4486-b533-c1ba2526f428,Namespace:tigera-operator,Attempt:0,}" Jul 15 04:43:51.743872 containerd[1524]: time="2025-07-15T04:43:51.743795177Z" level=info msg="connecting to shim 80993ddecda8049cb38deb10f70b7d58f5b7f2f9a8053d52d87d24ebad8fb6f3" address="unix:///run/containerd/s/ccb1b3f7d9ff50d2b19fe27c6ff5b120b24cce6abee9c4a1706d7b9398f03fc5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:43:51.767249 systemd[1]: Started cri-containerd-80993ddecda8049cb38deb10f70b7d58f5b7f2f9a8053d52d87d24ebad8fb6f3.scope - libcontainer container 80993ddecda8049cb38deb10f70b7d58f5b7f2f9a8053d52d87d24ebad8fb6f3. Jul 15 04:43:51.808826 containerd[1524]: time="2025-07-15T04:43:51.808692410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7cjkd,Uid:e74252f5-ca48-4486-b533-c1ba2526f428,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"80993ddecda8049cb38deb10f70b7d58f5b7f2f9a8053d52d87d24ebad8fb6f3\"" Jul 15 04:43:51.811540 containerd[1524]: time="2025-07-15T04:43:51.811283405Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 04:43:52.380861 kubelet[2670]: I0715 04:43:52.380671 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7srr7" podStartSLOduration=1.3806519210000001 podStartE2EDuration="1.380651921s" podCreationTimestamp="2025-07-15 04:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:43:52.378941144 +0000 UTC m=+7.122832024" watchObservedRunningTime="2025-07-15 
04:43:52.380651921 +0000 UTC m=+7.124542801" Jul 15 04:43:53.018017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003885969.mount: Deactivated successfully. Jul 15 04:43:53.437248 containerd[1524]: time="2025-07-15T04:43:53.437194985Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:53.438134 containerd[1524]: time="2025-07-15T04:43:53.437914102Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 04:43:53.438847 containerd[1524]: time="2025-07-15T04:43:53.438806766Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:53.440947 containerd[1524]: time="2025-07-15T04:43:53.440907976Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:43:53.441733 containerd[1524]: time="2025-07-15T04:43:53.441696480Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.629993574s" Jul 15 04:43:53.441733 containerd[1524]: time="2025-07-15T04:43:53.441731173Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 04:43:53.444988 containerd[1524]: time="2025-07-15T04:43:53.444959297Z" level=info msg="CreateContainer within sandbox \"80993ddecda8049cb38deb10f70b7d58f5b7f2f9a8053d52d87d24ebad8fb6f3\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 04:43:53.451674 containerd[1524]: time="2025-07-15T04:43:53.450195394Z" level=info msg="Container 96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:53.456396 containerd[1524]: time="2025-07-15T04:43:53.456360210Z" level=info msg="CreateContainer within sandbox \"80993ddecda8049cb38deb10f70b7d58f5b7f2f9a8053d52d87d24ebad8fb6f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815\"" Jul 15 04:43:53.456841 containerd[1524]: time="2025-07-15T04:43:53.456764085Z" level=info msg="StartContainer for \"96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815\"" Jul 15 04:43:53.458180 containerd[1524]: time="2025-07-15T04:43:53.458136294Z" level=info msg="connecting to shim 96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815" address="unix:///run/containerd/s/ccb1b3f7d9ff50d2b19fe27c6ff5b120b24cce6abee9c4a1706d7b9398f03fc5" protocol=ttrpc version=3 Jul 15 04:43:53.483188 systemd[1]: Started cri-containerd-96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815.scope - libcontainer container 96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815. 
Jul 15 04:43:53.514881 containerd[1524]: time="2025-07-15T04:43:53.514838101Z" level=info msg="StartContainer for \"96d818bf7e19146d42da0607bbd1a1c7075b46cfce6d16dfdce2c7ddc782d815\" returns successfully" Jul 15 04:43:54.382984 kubelet[2670]: I0715 04:43:54.382917 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-7cjkd" podStartSLOduration=1.7509558950000002 podStartE2EDuration="3.382898707s" podCreationTimestamp="2025-07-15 04:43:51 +0000 UTC" firstStartedPulling="2025-07-15 04:43:51.810656415 +0000 UTC m=+6.554547295" lastFinishedPulling="2025-07-15 04:43:53.442599227 +0000 UTC m=+8.186490107" observedRunningTime="2025-07-15 04:43:54.382746132 +0000 UTC m=+9.126637012" watchObservedRunningTime="2025-07-15 04:43:54.382898707 +0000 UTC m=+9.126789587" Jul 15 04:43:58.894228 sudo[1741]: pam_unix(sudo:session): session closed for user root Jul 15 04:43:58.901063 sshd[1740]: Connection closed by 10.0.0.1 port 58236 Jul 15 04:43:58.901215 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 15 04:43:58.905751 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:43:58.907317 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:58236.service: Deactivated successfully. Jul 15 04:43:58.909610 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:43:58.909862 systemd[1]: session-7.scope: Consumed 7.885s CPU time, 227.5M memory peak. Jul 15 04:43:58.912298 systemd-logind[1506]: Removed session 7. Jul 15 04:43:59.476592 update_engine[1507]: I20250715 04:43:59.476079 1507 update_attempter.cc:509] Updating boot flags... Jul 15 04:44:05.023811 systemd[1]: Created slice kubepods-besteffort-pod99f5f8ca_a645_496c_9924_0f2b0387e78d.slice - libcontainer container kubepods-besteffort-pod99f5f8ca_a645_496c_9924_0f2b0387e78d.slice. 
Jul 15 04:44:05.048949 kubelet[2670]: I0715 04:44:05.048864 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/99f5f8ca-a645-496c-9924-0f2b0387e78d-typha-certs\") pod \"calico-typha-84546bcc4f-lf4tw\" (UID: \"99f5f8ca-a645-496c-9924-0f2b0387e78d\") " pod="calico-system/calico-typha-84546bcc4f-lf4tw" Jul 15 04:44:05.048949 kubelet[2670]: I0715 04:44:05.048952 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99f5f8ca-a645-496c-9924-0f2b0387e78d-tigera-ca-bundle\") pod \"calico-typha-84546bcc4f-lf4tw\" (UID: \"99f5f8ca-a645-496c-9924-0f2b0387e78d\") " pod="calico-system/calico-typha-84546bcc4f-lf4tw" Jul 15 04:44:05.049356 kubelet[2670]: I0715 04:44:05.048975 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrth\" (UniqueName: \"kubernetes.io/projected/99f5f8ca-a645-496c-9924-0f2b0387e78d-kube-api-access-vxrth\") pod \"calico-typha-84546bcc4f-lf4tw\" (UID: \"99f5f8ca-a645-496c-9924-0f2b0387e78d\") " pod="calico-system/calico-typha-84546bcc4f-lf4tw" Jul 15 04:44:05.152741 systemd[1]: Created slice kubepods-besteffort-pod80b48592_79eb_4a44_8de1_28e7c8388b66.slice - libcontainer container kubepods-besteffort-pod80b48592_79eb_4a44_8de1_28e7c8388b66.slice. 
Jul 15 04:44:05.249665 kubelet[2670]: I0715 04:44:05.249605 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80b48592-79eb-4a44-8de1-28e7c8388b66-tigera-ca-bundle\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249665 kubelet[2670]: I0715 04:44:05.249669 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-cni-bin-dir\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249826 kubelet[2670]: I0715 04:44:05.249688 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-cni-log-dir\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249826 kubelet[2670]: I0715 04:44:05.249744 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-cni-net-dir\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249826 kubelet[2670]: I0715 04:44:05.249782 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-lib-modules\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249826 kubelet[2670]: I0715 04:44:05.249801 2670 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-var-run-calico\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249906 kubelet[2670]: I0715 04:44:05.249825 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-xtables-lock\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249906 kubelet[2670]: I0715 04:44:05.249844 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-var-lib-calico\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249906 kubelet[2670]: I0715 04:44:05.249862 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-flexvol-driver-host\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249906 kubelet[2670]: I0715 04:44:05.249879 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/80b48592-79eb-4a44-8de1-28e7c8388b66-policysync\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249989 kubelet[2670]: I0715 04:44:05.249915 2670 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224k5\" (UniqueName: \"kubernetes.io/projected/80b48592-79eb-4a44-8de1-28e7c8388b66-kube-api-access-224k5\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.249989 kubelet[2670]: I0715 04:44:05.249948 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/80b48592-79eb-4a44-8de1-28e7c8388b66-node-certs\") pod \"calico-node-pmbh2\" (UID: \"80b48592-79eb-4a44-8de1-28e7c8388b66\") " pod="calico-system/calico-node-pmbh2" Jul 15 04:44:05.327957 containerd[1524]: time="2025-07-15T04:44:05.327912543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84546bcc4f-lf4tw,Uid:99f5f8ca-a645-496c-9924-0f2b0387e78d,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:05.331268 kubelet[2670]: E0715 04:44:05.331190 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrgjj" podUID="f012837f-8aa6-4d96-a20b-3f91976c5b9b" Jul 15 04:44:05.351482 kubelet[2670]: I0715 04:44:05.351395 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f012837f-8aa6-4d96-a20b-3f91976c5b9b-socket-dir\") pod \"csi-node-driver-xrgjj\" (UID: \"f012837f-8aa6-4d96-a20b-3f91976c5b9b\") " pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:05.351482 kubelet[2670]: I0715 04:44:05.351445 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsvr\" (UniqueName: \"kubernetes.io/projected/f012837f-8aa6-4d96-a20b-3f91976c5b9b-kube-api-access-pdsvr\") pod 
\"csi-node-driver-xrgjj\" (UID: \"f012837f-8aa6-4d96-a20b-3f91976c5b9b\") " pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:05.351482 kubelet[2670]: I0715 04:44:05.351482 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f012837f-8aa6-4d96-a20b-3f91976c5b9b-kubelet-dir\") pod \"csi-node-driver-xrgjj\" (UID: \"f012837f-8aa6-4d96-a20b-3f91976c5b9b\") " pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:05.351647 kubelet[2670]: I0715 04:44:05.351501 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f012837f-8aa6-4d96-a20b-3f91976c5b9b-varrun\") pod \"csi-node-driver-xrgjj\" (UID: \"f012837f-8aa6-4d96-a20b-3f91976c5b9b\") " pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:05.351647 kubelet[2670]: I0715 04:44:05.351595 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f012837f-8aa6-4d96-a20b-3f91976c5b9b-registration-dir\") pod \"csi-node-driver-xrgjj\" (UID: \"f012837f-8aa6-4d96-a20b-3f91976c5b9b\") " pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:05.357753 kubelet[2670]: E0715 04:44:05.357712 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.358389 kubelet[2670]: W0715 04:44:05.358355 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.358430 kubelet[2670]: E0715 04:44:05.358410 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.413148 containerd[1524]: time="2025-07-15T04:44:05.412989364Z" level=info msg="connecting to shim 191d5a2feac02d0ede76ee9e841c417550ac950ce89efde2d8458c687fe6766c" address="unix:///run/containerd/s/7d7404f19837c295e4704320c25ce77567a41ddfaefe68c2d852837bf2554901" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:05.452705 kubelet[2670]: E0715 04:44:05.452673 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.452705 kubelet[2670]: W0715 04:44:05.452697 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.452866 kubelet[2670]: E0715 04:44:05.452719 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.453509 kubelet[2670]: E0715 04:44:05.453473 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.453509 kubelet[2670]: W0715 04:44:05.453491 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.453627 kubelet[2670]: E0715 04:44:05.453605 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.454181 kubelet[2670]: E0715 04:44:05.454157 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.454181 kubelet[2670]: W0715 04:44:05.454173 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.454685 kubelet[2670]: E0715 04:44:05.454298 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.454821 kubelet[2670]: E0715 04:44:05.454805 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.454821 kubelet[2670]: W0715 04:44:05.454819 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.456471 kubelet[2670]: E0715 04:44:05.455227 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.456471 kubelet[2670]: E0715 04:44:05.456090 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.456471 kubelet[2670]: W0715 04:44:05.456100 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.456471 kubelet[2670]: E0715 04:44:05.456117 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.456471 kubelet[2670]: E0715 04:44:05.456374 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.456471 kubelet[2670]: W0715 04:44:05.456383 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.456471 kubelet[2670]: E0715 04:44:05.456400 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.456966 kubelet[2670]: E0715 04:44:05.456931 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.456966 kubelet[2670]: W0715 04:44:05.456950 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.457076 kubelet[2670]: E0715 04:44:05.457048 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.457668 kubelet[2670]: E0715 04:44:05.457637 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.457668 kubelet[2670]: W0715 04:44:05.457665 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.457756 kubelet[2670]: E0715 04:44:05.457686 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.458125 kubelet[2670]: E0715 04:44:05.458106 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.458163 kubelet[2670]: W0715 04:44:05.458121 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.458194 kubelet[2670]: E0715 04:44:05.458175 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.458348 kubelet[2670]: E0715 04:44:05.458333 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.458377 kubelet[2670]: W0715 04:44:05.458355 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.458403 kubelet[2670]: E0715 04:44:05.458385 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.458579 kubelet[2670]: E0715 04:44:05.458559 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.458579 kubelet[2670]: W0715 04:44:05.458578 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.458635 kubelet[2670]: E0715 04:44:05.458623 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.458808 kubelet[2670]: E0715 04:44:05.458794 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.458808 kubelet[2670]: W0715 04:44:05.458807 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.458854 kubelet[2670]: E0715 04:44:05.458819 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.459577 kubelet[2670]: E0715 04:44:05.459553 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.459577 kubelet[2670]: W0715 04:44:05.459568 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.459682 kubelet[2670]: E0715 04:44:05.459582 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.460181 kubelet[2670]: E0715 04:44:05.460159 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.460181 kubelet[2670]: W0715 04:44:05.460172 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.460468 kubelet[2670]: E0715 04:44:05.460298 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.460468 kubelet[2670]: W0715 04:44:05.460308 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.460468 kubelet[2670]: E0715 04:44:05.460401 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.460468 kubelet[2670]: E0715 04:44:05.460399 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.460468 kubelet[2670]: E0715 04:44:05.460439 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.460468 kubelet[2670]: W0715 04:44:05.460445 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.460634 kubelet[2670]: E0715 04:44:05.460523 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.460889 kubelet[2670]: E0715 04:44:05.460869 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.460889 kubelet[2670]: W0715 04:44:05.460882 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.460992 kubelet[2670]: E0715 04:44:05.460975 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.461150 kubelet[2670]: E0715 04:44:05.461128 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.462115 kubelet[2670]: W0715 04:44:05.461150 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.462115 kubelet[2670]: E0715 04:44:05.462072 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462291 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.462899 kubelet[2670]: W0715 04:44:05.462303 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462344 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462435 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.462899 kubelet[2670]: W0715 04:44:05.462474 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462528 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462699 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.462899 kubelet[2670]: W0715 04:44:05.462710 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462742 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.462899 kubelet[2670]: E0715 04:44:05.462851 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.463224 kubelet[2670]: W0715 04:44:05.462860 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.463224 kubelet[2670]: E0715 04:44:05.462888 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.463224 kubelet[2670]: E0715 04:44:05.463108 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.463224 kubelet[2670]: W0715 04:44:05.463119 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.463224 kubelet[2670]: E0715 04:44:05.463136 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.464224 kubelet[2670]: E0715 04:44:05.464205 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.464224 kubelet[2670]: W0715 04:44:05.464221 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.464279 kubelet[2670]: E0715 04:44:05.464240 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.464442 kubelet[2670]: E0715 04:44:05.464426 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.464442 kubelet[2670]: W0715 04:44:05.464438 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.464497 kubelet[2670]: E0715 04:44:05.464446 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:44:05.469103 containerd[1524]: time="2025-07-15T04:44:05.467625314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmbh2,Uid:80b48592-79eb-4a44-8de1-28e7c8388b66,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:05.481396 kubelet[2670]: E0715 04:44:05.481358 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:44:05.481396 kubelet[2670]: W0715 04:44:05.481384 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:44:05.482065 kubelet[2670]: E0715 04:44:05.481735 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:44:05.504908 containerd[1524]: time="2025-07-15T04:44:05.504861891Z" level=info msg="connecting to shim 3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722" address="unix:///run/containerd/s/93add462ae458e3a234b4c06af325ef245a28b7e46b9dc014e226e881393dadb" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:05.528208 systemd[1]: Started cri-containerd-191d5a2feac02d0ede76ee9e841c417550ac950ce89efde2d8458c687fe6766c.scope - libcontainer container 191d5a2feac02d0ede76ee9e841c417550ac950ce89efde2d8458c687fe6766c. Jul 15 04:44:05.533757 systemd[1]: Started cri-containerd-3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722.scope - libcontainer container 3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722. 
Jul 15 04:44:05.577059 containerd[1524]: time="2025-07-15T04:44:05.576452641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmbh2,Uid:80b48592-79eb-4a44-8de1-28e7c8388b66,Namespace:calico-system,Attempt:0,} returns sandbox id \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\""
Jul 15 04:44:05.585504 containerd[1524]: time="2025-07-15T04:44:05.584854845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84546bcc4f-lf4tw,Uid:99f5f8ca-a645-496c-9924-0f2b0387e78d,Namespace:calico-system,Attempt:0,} returns sandbox id \"191d5a2feac02d0ede76ee9e841c417550ac950ce89efde2d8458c687fe6766c\""
Jul 15 04:44:05.585504 containerd[1524]: time="2025-07-15T04:44:05.585218841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 15 04:44:06.591607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222874915.mount: Deactivated successfully.
Jul 15 04:44:06.656056 containerd[1524]: time="2025-07-15T04:44:06.655334966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:06.656056 containerd[1524]: time="2025-07-15T04:44:06.655821103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360"
Jul 15 04:44:06.656648 containerd[1524]: time="2025-07-15T04:44:06.656619103Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:06.659202 containerd[1524]: time="2025-07-15T04:44:06.659174576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:06.660235 containerd[1524]: time="2025-07-15T04:44:06.659841669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.074590262s"
Jul 15 04:44:06.660553 containerd[1524]: time="2025-07-15T04:44:06.660302722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 15 04:44:06.661354 containerd[1524]: time="2025-07-15T04:44:06.661329768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 04:44:06.666135 containerd[1524]: time="2025-07-15T04:44:06.666102925Z" level=info msg="CreateContainer within sandbox \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 15 04:44:06.673378 containerd[1524]: time="2025-07-15T04:44:06.673277764Z" level=info msg="Container e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:44:06.677648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905384046.mount: Deactivated successfully.
Jul 15 04:44:06.681454 containerd[1524]: time="2025-07-15T04:44:06.681405554Z" level=info msg="CreateContainer within sandbox \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\""
Jul 15 04:44:06.682007 containerd[1524]: time="2025-07-15T04:44:06.681964587Z" level=info msg="StartContainer for \"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\""
Jul 15 04:44:06.683742 containerd[1524]: time="2025-07-15T04:44:06.683707616Z" level=info msg="connecting to shim e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1" address="unix:///run/containerd/s/93add462ae458e3a234b4c06af325ef245a28b7e46b9dc014e226e881393dadb" protocol=ttrpc version=3
Jul 15 04:44:06.704195 systemd[1]: Started cri-containerd-e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1.scope - libcontainer container e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1.
Jul 15 04:44:06.737911 containerd[1524]: time="2025-07-15T04:44:06.737803866Z" level=info msg="StartContainer for \"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\" returns successfully"
Jul 15 04:44:06.774099 systemd[1]: cri-containerd-e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1.scope: Deactivated successfully.
Jul 15 04:44:06.793484 containerd[1524]: time="2025-07-15T04:44:06.793380733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\" id:\"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\" pid:3275 exited_at:{seconds:1752554646 nanos:792411698}"
Jul 15 04:44:06.798660 containerd[1524]: time="2025-07-15T04:44:06.798539807Z" level=info msg="received exit event container_id:\"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\" id:\"e8b35bbce226f29897d58094665cd5f769ea4384ba903ea0306124653a290da1\" pid:3275 exited_at:{seconds:1752554646 nanos:792411698}"
Jul 15 04:44:07.345370 kubelet[2670]: E0715 04:44:07.345294 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrgjj" podUID="f012837f-8aa6-4d96-a20b-3f91976c5b9b"
Jul 15 04:44:08.212994 containerd[1524]: time="2025-07-15T04:44:08.212945769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:08.213882 containerd[1524]: time="2025-07-15T04:44:08.213842054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828"
Jul 15 04:44:08.214606 containerd[1524]: time="2025-07-15T04:44:08.214578229Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:08.237193 containerd[1524]: time="2025-07-15T04:44:08.216312467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:08.237301 containerd[1524]: time="2025-07-15T04:44:08.216723062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.555159848s"
Jul 15 04:44:08.237334 containerd[1524]: time="2025-07-15T04:44:08.237299239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 15 04:44:08.239217 containerd[1524]: time="2025-07-15T04:44:08.239193066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 15 04:44:08.253237 containerd[1524]: time="2025-07-15T04:44:08.253145747Z" level=info msg="CreateContainer within sandbox \"191d5a2feac02d0ede76ee9e841c417550ac950ce89efde2d8458c687fe6766c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 15 04:44:08.264147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092940959.mount: Deactivated successfully.
Jul 15 04:44:08.264459 containerd[1524]: time="2025-07-15T04:44:08.264246345Z" level=info msg="Container 44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:44:08.271552 containerd[1524]: time="2025-07-15T04:44:08.271507157Z" level=info msg="CreateContainer within sandbox \"191d5a2feac02d0ede76ee9e841c417550ac950ce89efde2d8458c687fe6766c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab\""
Jul 15 04:44:08.271951 containerd[1524]: time="2025-07-15T04:44:08.271925274Z" level=info msg="StartContainer for \"44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab\""
Jul 15 04:44:08.273219 containerd[1524]: time="2025-07-15T04:44:08.273183905Z" level=info msg="connecting to shim 44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab" address="unix:///run/containerd/s/7d7404f19837c295e4704320c25ce77567a41ddfaefe68c2d852837bf2554901" protocol=ttrpc version=3
Jul 15 04:44:08.288181 systemd[1]: Started cri-containerd-44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab.scope - libcontainer container 44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab.
Jul 15 04:44:08.327780 containerd[1524]: time="2025-07-15T04:44:08.327738398Z" level=info msg="StartContainer for \"44fcf1b8d389f631454f8ce02806c970b93a23cb5c7e8e643c1b01d644611eab\" returns successfully"
Jul 15 04:44:09.354073 kubelet[2670]: E0715 04:44:09.354001 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrgjj" podUID="f012837f-8aa6-4d96-a20b-3f91976c5b9b"
Jul 15 04:44:09.420174 kubelet[2670]: I0715 04:44:09.420141 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:44:11.342421 kubelet[2670]: E0715 04:44:11.342375 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrgjj" podUID="f012837f-8aa6-4d96-a20b-3f91976c5b9b"
Jul 15 04:44:11.707168 containerd[1524]: time="2025-07-15T04:44:11.706595498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:11.707168 containerd[1524]: time="2025-07-15T04:44:11.706972199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 15 04:44:11.707701 containerd[1524]: time="2025-07-15T04:44:11.707654190Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:11.709510 containerd[1524]: time="2025-07-15T04:44:11.709477805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:44:11.710410 containerd[1524]: time="2025-07-15T04:44:11.710379951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.471153398s"
Jul 15 04:44:11.710515 containerd[1524]: time="2025-07-15T04:44:11.710499530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 15 04:44:11.712553 containerd[1524]: time="2025-07-15T04:44:11.712516536Z" level=info msg="CreateContainer within sandbox \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 15 04:44:11.726320 containerd[1524]: time="2025-07-15T04:44:11.726277162Z" level=info msg="Container 853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:44:11.729151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540600715.mount: Deactivated successfully.
Jul 15 04:44:11.734942 containerd[1524]: time="2025-07-15T04:44:11.734891876Z" level=info msg="CreateContainer within sandbox \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\""
Jul 15 04:44:11.735676 containerd[1524]: time="2025-07-15T04:44:11.735635956Z" level=info msg="StartContainer for \"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\""
Jul 15 04:44:11.737025 containerd[1524]: time="2025-07-15T04:44:11.737000377Z" level=info msg="connecting to shim 853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788" address="unix:///run/containerd/s/93add462ae458e3a234b4c06af325ef245a28b7e46b9dc014e226e881393dadb" protocol=ttrpc version=3
Jul 15 04:44:11.762196 systemd[1]: Started cri-containerd-853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788.scope - libcontainer container 853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788.
Jul 15 04:44:11.851142 containerd[1524]: time="2025-07-15T04:44:11.849390720Z" level=info msg="StartContainer for \"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\" returns successfully"
Jul 15 04:44:12.369086 systemd[1]: cri-containerd-853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788.scope: Deactivated successfully.
Jul 15 04:44:12.369917 containerd[1524]: time="2025-07-15T04:44:12.369851332Z" level=info msg="received exit event container_id:\"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\" id:\"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\" pid:3377 exited_at:{seconds:1752554652 nanos:369643220}"
Jul 15 04:44:12.370128 systemd[1]: cri-containerd-853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788.scope: Consumed 481ms CPU time, 176.9M memory peak, 2.4M read from disk, 165.8M written to disk.
Jul 15 04:44:12.370242 containerd[1524]: time="2025-07-15T04:44:12.370188905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\" id:\"853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788\" pid:3377 exited_at:{seconds:1752554652 nanos:369643220}" Jul 15 04:44:12.393846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-853f3c84bd1e90903c4fb79a66058e11a29ec66476a35a33d33d2c611bd11788-rootfs.mount: Deactivated successfully. Jul 15 04:44:12.446552 containerd[1524]: time="2025-07-15T04:44:12.446510526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 04:44:12.448666 kubelet[2670]: I0715 04:44:12.448606 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 04:44:12.465099 kubelet[2670]: I0715 04:44:12.465012 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84546bcc4f-lf4tw" podStartSLOduration=5.812965965 podStartE2EDuration="8.46499776s" podCreationTimestamp="2025-07-15 04:44:04 +0000 UTC" firstStartedPulling="2025-07-15 04:44:05.586496869 +0000 UTC m=+20.330387749" lastFinishedPulling="2025-07-15 04:44:08.238528664 +0000 UTC m=+22.982419544" observedRunningTime="2025-07-15 04:44:08.430275177 +0000 UTC m=+23.174166057" watchObservedRunningTime="2025-07-15 04:44:12.46499776 +0000 UTC m=+27.208888640" Jul 15 04:44:12.484642 systemd[1]: Created slice kubepods-burstable-podfce61345_00e5_4ff3_a878_00cffc66772c.slice - libcontainer container kubepods-burstable-podfce61345_00e5_4ff3_a878_00cffc66772c.slice. Jul 15 04:44:12.497487 systemd[1]: Created slice kubepods-besteffort-pod91fbad59_8877_4e28_be2e_9de422626132.slice - libcontainer container kubepods-besteffort-pod91fbad59_8877_4e28_be2e_9de422626132.slice. 
Jul 15 04:44:12.505270 kubelet[2670]: I0715 04:44:12.505230 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgv7j\" (UniqueName: \"kubernetes.io/projected/91fbad59-8877-4e28-be2e-9de422626132-kube-api-access-zgv7j\") pod \"calico-kube-controllers-6ff68d64f6-j49l9\" (UID: \"91fbad59-8877-4e28-be2e-9de422626132\") " pod="calico-system/calico-kube-controllers-6ff68d64f6-j49l9" Jul 15 04:44:12.505270 kubelet[2670]: I0715 04:44:12.505267 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fce61345-00e5-4ff3-a878-00cffc66772c-config-volume\") pod \"coredns-668d6bf9bc-gtnvx\" (UID: \"fce61345-00e5-4ff3-a878-00cffc66772c\") " pod="kube-system/coredns-668d6bf9bc-gtnvx" Jul 15 04:44:12.505633 systemd[1]: Created slice kubepods-besteffort-pod26129a7b_5c2e_4c5b_9a03_568803205b97.slice - libcontainer container kubepods-besteffort-pod26129a7b_5c2e_4c5b_9a03_568803205b97.slice. 
Jul 15 04:44:12.507184 kubelet[2670]: I0715 04:44:12.507158 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a27411-bde6-41a1-806d-7ff1ba284ea9-config\") pod \"goldmane-768f4c5c69-db5m2\" (UID: \"82a27411-bde6-41a1-806d-7ff1ba284ea9\") " pod="calico-system/goldmane-768f4c5c69-db5m2" Jul 15 04:44:12.507257 kubelet[2670]: I0715 04:44:12.507194 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/82a27411-bde6-41a1-806d-7ff1ba284ea9-goldmane-key-pair\") pod \"goldmane-768f4c5c69-db5m2\" (UID: \"82a27411-bde6-41a1-806d-7ff1ba284ea9\") " pod="calico-system/goldmane-768f4c5c69-db5m2" Jul 15 04:44:12.507257 kubelet[2670]: I0715 04:44:12.507218 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4gn\" (UniqueName: \"kubernetes.io/projected/26129a7b-5c2e-4c5b-9a03-568803205b97-kube-api-access-ws4gn\") pod \"whisker-79cb788654-7g89k\" (UID: \"26129a7b-5c2e-4c5b-9a03-568803205b97\") " pod="calico-system/whisker-79cb788654-7g89k" Jul 15 04:44:12.507257 kubelet[2670]: I0715 04:44:12.507235 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82a27411-bde6-41a1-806d-7ff1ba284ea9-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-db5m2\" (UID: \"82a27411-bde6-41a1-806d-7ff1ba284ea9\") " pod="calico-system/goldmane-768f4c5c69-db5m2" Jul 15 04:44:12.507257 kubelet[2670]: I0715 04:44:12.507254 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aea0df7c-5fe3-48eb-abd7-ab7bd635c065-calico-apiserver-certs\") pod \"calico-apiserver-5c4b7c8858-wmjsf\" (UID: \"aea0df7c-5fe3-48eb-abd7-ab7bd635c065\") " 
pod="calico-apiserver/calico-apiserver-5c4b7c8858-wmjsf" Jul 15 04:44:12.507353 kubelet[2670]: I0715 04:44:12.507270 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-backend-key-pair\") pod \"whisker-79cb788654-7g89k\" (UID: \"26129a7b-5c2e-4c5b-9a03-568803205b97\") " pod="calico-system/whisker-79cb788654-7g89k" Jul 15 04:44:12.507378 kubelet[2670]: I0715 04:44:12.507365 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fbad59-8877-4e28-be2e-9de422626132-tigera-ca-bundle\") pod \"calico-kube-controllers-6ff68d64f6-j49l9\" (UID: \"91fbad59-8877-4e28-be2e-9de422626132\") " pod="calico-system/calico-kube-controllers-6ff68d64f6-j49l9" Jul 15 04:44:12.507400 kubelet[2670]: I0715 04:44:12.507391 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kt7f\" (UniqueName: \"kubernetes.io/projected/aea0df7c-5fe3-48eb-abd7-ab7bd635c065-kube-api-access-9kt7f\") pod \"calico-apiserver-5c4b7c8858-wmjsf\" (UID: \"aea0df7c-5fe3-48eb-abd7-ab7bd635c065\") " pod="calico-apiserver/calico-apiserver-5c4b7c8858-wmjsf" Jul 15 04:44:12.507422 kubelet[2670]: I0715 04:44:12.507408 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-ca-bundle\") pod \"whisker-79cb788654-7g89k\" (UID: \"26129a7b-5c2e-4c5b-9a03-568803205b97\") " pod="calico-system/whisker-79cb788654-7g89k" Jul 15 04:44:12.507447 kubelet[2670]: I0715 04:44:12.507429 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qm2t\" (UniqueName: 
\"kubernetes.io/projected/fce61345-00e5-4ff3-a878-00cffc66772c-kube-api-access-6qm2t\") pod \"coredns-668d6bf9bc-gtnvx\" (UID: \"fce61345-00e5-4ff3-a878-00cffc66772c\") " pod="kube-system/coredns-668d6bf9bc-gtnvx" Jul 15 04:44:12.507471 kubelet[2670]: I0715 04:44:12.507448 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnrr\" (UniqueName: \"kubernetes.io/projected/82a27411-bde6-41a1-806d-7ff1ba284ea9-kube-api-access-zfnrr\") pod \"goldmane-768f4c5c69-db5m2\" (UID: \"82a27411-bde6-41a1-806d-7ff1ba284ea9\") " pod="calico-system/goldmane-768f4c5c69-db5m2" Jul 15 04:44:12.511278 systemd[1]: Created slice kubepods-besteffort-pod82a27411_bde6_41a1_806d_7ff1ba284ea9.slice - libcontainer container kubepods-besteffort-pod82a27411_bde6_41a1_806d_7ff1ba284ea9.slice. Jul 15 04:44:12.516090 systemd[1]: Created slice kubepods-besteffort-podaea0df7c_5fe3_48eb_abd7_ab7bd635c065.slice - libcontainer container kubepods-besteffort-podaea0df7c_5fe3_48eb_abd7_ab7bd635c065.slice. Jul 15 04:44:12.521312 systemd[1]: Created slice kubepods-burstable-podc731f066_95c8_4765_956b_f87b24f29f54.slice - libcontainer container kubepods-burstable-podc731f066_95c8_4765_956b_f87b24f29f54.slice. Jul 15 04:44:12.531823 systemd[1]: Created slice kubepods-besteffort-pod0d43d695_d62b_4cf8_9d5d_f7d9df53995a.slice - libcontainer container kubepods-besteffort-pod0d43d695_d62b_4cf8_9d5d_f7d9df53995a.slice. 
Jul 15 04:44:12.607925 kubelet[2670]: I0715 04:44:12.607868 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbwf\" (UniqueName: \"kubernetes.io/projected/c731f066-95c8-4765-956b-f87b24f29f54-kube-api-access-rtbwf\") pod \"coredns-668d6bf9bc-jsx84\" (UID: \"c731f066-95c8-4765-956b-f87b24f29f54\") " pod="kube-system/coredns-668d6bf9bc-jsx84" Jul 15 04:44:12.607925 kubelet[2670]: I0715 04:44:12.607925 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d43d695-d62b-4cf8-9d5d-f7d9df53995a-calico-apiserver-certs\") pod \"calico-apiserver-5c4b7c8858-4lhx7\" (UID: \"0d43d695-d62b-4cf8-9d5d-f7d9df53995a\") " pod="calico-apiserver/calico-apiserver-5c4b7c8858-4lhx7" Jul 15 04:44:12.608099 kubelet[2670]: I0715 04:44:12.607972 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c731f066-95c8-4765-956b-f87b24f29f54-config-volume\") pod \"coredns-668d6bf9bc-jsx84\" (UID: \"c731f066-95c8-4765-956b-f87b24f29f54\") " pod="kube-system/coredns-668d6bf9bc-jsx84" Jul 15 04:44:12.608099 kubelet[2670]: I0715 04:44:12.608068 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62csf\" (UniqueName: \"kubernetes.io/projected/0d43d695-d62b-4cf8-9d5d-f7d9df53995a-kube-api-access-62csf\") pod \"calico-apiserver-5c4b7c8858-4lhx7\" (UID: \"0d43d695-d62b-4cf8-9d5d-f7d9df53995a\") " pod="calico-apiserver/calico-apiserver-5c4b7c8858-4lhx7" Jul 15 04:44:12.792803 containerd[1524]: time="2025-07-15T04:44:12.792549986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gtnvx,Uid:fce61345-00e5-4ff3-a878-00cffc66772c,Namespace:kube-system,Attempt:0,}" Jul 15 04:44:12.807628 containerd[1524]: time="2025-07-15T04:44:12.807374650Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff68d64f6-j49l9,Uid:91fbad59-8877-4e28-be2e-9de422626132,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:12.814329 containerd[1524]: time="2025-07-15T04:44:12.813621261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79cb788654-7g89k,Uid:26129a7b-5c2e-4c5b-9a03-568803205b97,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:12.825715 containerd[1524]: time="2025-07-15T04:44:12.825684576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-wmjsf,Uid:aea0df7c-5fe3-48eb-abd7-ab7bd635c065,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:44:12.826179 containerd[1524]: time="2025-07-15T04:44:12.826153289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-db5m2,Uid:82a27411-bde6-41a1-806d-7ff1ba284ea9,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:12.838346 containerd[1524]: time="2025-07-15T04:44:12.838243088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsx84,Uid:c731f066-95c8-4765-956b-f87b24f29f54,Namespace:kube-system,Attempt:0,}" Jul 15 04:44:12.848613 containerd[1524]: time="2025-07-15T04:44:12.847503527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-4lhx7,Uid:0d43d695-d62b-4cf8-9d5d-f7d9df53995a,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:44:13.230353 containerd[1524]: time="2025-07-15T04:44:13.230302252Z" level=error msg="Failed to destroy network for sandbox \"4158fe6638d05ff304cafd93ca3bdb26a256c7ffc90e9812d412929269215b50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.231850 containerd[1524]: time="2025-07-15T04:44:13.231803036Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-gtnvx,Uid:fce61345-00e5-4ff3-a878-00cffc66772c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4158fe6638d05ff304cafd93ca3bdb26a256c7ffc90e9812d412929269215b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.232168 kubelet[2670]: E0715 04:44:13.232122 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4158fe6638d05ff304cafd93ca3bdb26a256c7ffc90e9812d412929269215b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.234755 containerd[1524]: time="2025-07-15T04:44:13.234723993Z" level=error msg="Failed to destroy network for sandbox \"d350c3d5c024b650fc17f8e86d359cb7d8948d8d3e48bf5834a1ee0b9513aaac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.235446 kubelet[2670]: E0715 04:44:13.235306 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4158fe6638d05ff304cafd93ca3bdb26a256c7ffc90e9812d412929269215b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gtnvx" Jul 15 04:44:13.235493 kubelet[2670]: E0715 04:44:13.235457 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4158fe6638d05ff304cafd93ca3bdb26a256c7ffc90e9812d412929269215b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gtnvx" Jul 15 04:44:13.235574 kubelet[2670]: E0715 04:44:13.235529 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gtnvx_kube-system(fce61345-00e5-4ff3-a878-00cffc66772c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gtnvx_kube-system(fce61345-00e5-4ff3-a878-00cffc66772c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4158fe6638d05ff304cafd93ca3bdb26a256c7ffc90e9812d412929269215b50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gtnvx" podUID="fce61345-00e5-4ff3-a878-00cffc66772c" Jul 15 04:44:13.239125 containerd[1524]: time="2025-07-15T04:44:13.239064241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff68d64f6-j49l9,Uid:91fbad59-8877-4e28-be2e-9de422626132,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d350c3d5c024b650fc17f8e86d359cb7d8948d8d3e48bf5834a1ee0b9513aaac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.239316 kubelet[2670]: E0715 04:44:13.239282 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d350c3d5c024b650fc17f8e86d359cb7d8948d8d3e48bf5834a1ee0b9513aaac\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.239361 kubelet[2670]: E0715 04:44:13.239335 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d350c3d5c024b650fc17f8e86d359cb7d8948d8d3e48bf5834a1ee0b9513aaac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff68d64f6-j49l9" Jul 15 04:44:13.239361 kubelet[2670]: E0715 04:44:13.239353 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d350c3d5c024b650fc17f8e86d359cb7d8948d8d3e48bf5834a1ee0b9513aaac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff68d64f6-j49l9" Jul 15 04:44:13.239425 kubelet[2670]: E0715 04:44:13.239398 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ff68d64f6-j49l9_calico-system(91fbad59-8877-4e28-be2e-9de422626132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ff68d64f6-j49l9_calico-system(91fbad59-8877-4e28-be2e-9de422626132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d350c3d5c024b650fc17f8e86d359cb7d8948d8d3e48bf5834a1ee0b9513aaac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ff68d64f6-j49l9" 
podUID="91fbad59-8877-4e28-be2e-9de422626132" Jul 15 04:44:13.253681 containerd[1524]: time="2025-07-15T04:44:13.253623577Z" level=error msg="Failed to destroy network for sandbox \"a3f96cea097d7a9d277c4080308e97aa248944858cb206037f07c052c7838246\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.256488 containerd[1524]: time="2025-07-15T04:44:13.256391711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-4lhx7,Uid:0d43d695-d62b-4cf8-9d5d-f7d9df53995a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f96cea097d7a9d277c4080308e97aa248944858cb206037f07c052c7838246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.256688 kubelet[2670]: E0715 04:44:13.256628 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f96cea097d7a9d277c4080308e97aa248944858cb206037f07c052c7838246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.256741 kubelet[2670]: E0715 04:44:13.256707 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f96cea097d7a9d277c4080308e97aa248944858cb206037f07c052c7838246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c4b7c8858-4lhx7" Jul 15 04:44:13.256741 
kubelet[2670]: E0715 04:44:13.256730 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f96cea097d7a9d277c4080308e97aa248944858cb206037f07c052c7838246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c4b7c8858-4lhx7" Jul 15 04:44:13.256901 kubelet[2670]: E0715 04:44:13.256770 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c4b7c8858-4lhx7_calico-apiserver(0d43d695-d62b-4cf8-9d5d-f7d9df53995a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c4b7c8858-4lhx7_calico-apiserver(0d43d695-d62b-4cf8-9d5d-f7d9df53995a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3f96cea097d7a9d277c4080308e97aa248944858cb206037f07c052c7838246\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c4b7c8858-4lhx7" podUID="0d43d695-d62b-4cf8-9d5d-f7d9df53995a" Jul 15 04:44:13.259883 containerd[1524]: time="2025-07-15T04:44:13.259785738Z" level=error msg="Failed to destroy network for sandbox \"12373638177343842ebf5e2470662a7e659a4cdc0b95ef8dcbdaa889fd6bf6b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.261616 containerd[1524]: time="2025-07-15T04:44:13.261583087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsx84,Uid:c731f066-95c8-4765-956b-f87b24f29f54,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"12373638177343842ebf5e2470662a7e659a4cdc0b95ef8dcbdaa889fd6bf6b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.262185 kubelet[2670]: E0715 04:44:13.262154 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12373638177343842ebf5e2470662a7e659a4cdc0b95ef8dcbdaa889fd6bf6b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.262257 kubelet[2670]: E0715 04:44:13.262201 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12373638177343842ebf5e2470662a7e659a4cdc0b95ef8dcbdaa889fd6bf6b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsx84" Jul 15 04:44:13.262257 kubelet[2670]: E0715 04:44:13.262220 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12373638177343842ebf5e2470662a7e659a4cdc0b95ef8dcbdaa889fd6bf6b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsx84" Jul 15 04:44:13.262345 containerd[1524]: time="2025-07-15T04:44:13.262193018Z" level=error msg="Failed to destroy network for sandbox \"e8bd93cd50affcaaa177786559334dba9a186a4def160a27d414ec53b4719921\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.262377 kubelet[2670]: E0715 04:44:13.262255 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsx84_kube-system(c731f066-95c8-4765-956b-f87b24f29f54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jsx84_kube-system(c731f066-95c8-4765-956b-f87b24f29f54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12373638177343842ebf5e2470662a7e659a4cdc0b95ef8dcbdaa889fd6bf6b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsx84" podUID="c731f066-95c8-4765-956b-f87b24f29f54" Jul 15 04:44:13.262766 containerd[1524]: time="2025-07-15T04:44:13.262742020Z" level=error msg="Failed to destroy network for sandbox \"253e119a6b98e7ab351ee87f844cb40bbadb4e8660ab8968e9e40cf58567ea54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.263157 containerd[1524]: time="2025-07-15T04:44:13.263110395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-wmjsf,Uid:aea0df7c-5fe3-48eb-abd7-ab7bd635c065,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd93cd50affcaaa177786559334dba9a186a4def160a27d414ec53b4719921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.263356 kubelet[2670]: E0715 04:44:13.263314 2670 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd93cd50affcaaa177786559334dba9a186a4def160a27d414ec53b4719921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.263415 kubelet[2670]: E0715 04:44:13.263365 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd93cd50affcaaa177786559334dba9a186a4def160a27d414ec53b4719921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c4b7c8858-wmjsf" Jul 15 04:44:13.263415 kubelet[2670]: E0715 04:44:13.263397 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd93cd50affcaaa177786559334dba9a186a4def160a27d414ec53b4719921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c4b7c8858-wmjsf" Jul 15 04:44:13.263492 kubelet[2670]: E0715 04:44:13.263443 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c4b7c8858-wmjsf_calico-apiserver(aea0df7c-5fe3-48eb-abd7-ab7bd635c065)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c4b7c8858-wmjsf_calico-apiserver(aea0df7c-5fe3-48eb-abd7-ab7bd635c065)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8bd93cd50affcaaa177786559334dba9a186a4def160a27d414ec53b4719921\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c4b7c8858-wmjsf" podUID="aea0df7c-5fe3-48eb-abd7-ab7bd635c065" Jul 15 04:44:13.264111 containerd[1524]: time="2025-07-15T04:44:13.263998808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-db5m2,Uid:82a27411-bde6-41a1-806d-7ff1ba284ea9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"253e119a6b98e7ab351ee87f844cb40bbadb4e8660ab8968e9e40cf58567ea54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.265071 kubelet[2670]: E0715 04:44:13.264189 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253e119a6b98e7ab351ee87f844cb40bbadb4e8660ab8968e9e40cf58567ea54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.265133 kubelet[2670]: E0715 04:44:13.265093 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253e119a6b98e7ab351ee87f844cb40bbadb4e8660ab8968e9e40cf58567ea54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-db5m2" Jul 15 04:44:13.265174 kubelet[2670]: E0715 04:44:13.265113 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253e119a6b98e7ab351ee87f844cb40bbadb4e8660ab8968e9e40cf58567ea54\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-db5m2" Jul 15 04:44:13.265216 kubelet[2670]: E0715 04:44:13.265194 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-db5m2_calico-system(82a27411-bde6-41a1-806d-7ff1ba284ea9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-db5m2_calico-system(82a27411-bde6-41a1-806d-7ff1ba284ea9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"253e119a6b98e7ab351ee87f844cb40bbadb4e8660ab8968e9e40cf58567ea54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-db5m2" podUID="82a27411-bde6-41a1-806d-7ff1ba284ea9" Jul 15 04:44:13.266102 containerd[1524]: time="2025-07-15T04:44:13.265992506Z" level=error msg="Failed to destroy network for sandbox \"eff5bd37565cfa8a7f7e7d67582f6f2c620d9b6eb3b9ef2fc5692f79b494b11e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.267116 containerd[1524]: time="2025-07-15T04:44:13.266996176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79cb788654-7g89k,Uid:26129a7b-5c2e-4c5b-9a03-568803205b97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff5bd37565cfa8a7f7e7d67582f6f2c620d9b6eb3b9ef2fc5692f79b494b11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.267236 
kubelet[2670]: E0715 04:44:13.267199 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff5bd37565cfa8a7f7e7d67582f6f2c620d9b6eb3b9ef2fc5692f79b494b11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.267274 kubelet[2670]: E0715 04:44:13.267243 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff5bd37565cfa8a7f7e7d67582f6f2c620d9b6eb3b9ef2fc5692f79b494b11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79cb788654-7g89k" Jul 15 04:44:13.267274 kubelet[2670]: E0715 04:44:13.267258 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff5bd37565cfa8a7f7e7d67582f6f2c620d9b6eb3b9ef2fc5692f79b494b11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79cb788654-7g89k" Jul 15 04:44:13.267337 kubelet[2670]: E0715 04:44:13.267290 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79cb788654-7g89k_calico-system(26129a7b-5c2e-4c5b-9a03-568803205b97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79cb788654-7g89k_calico-system(26129a7b-5c2e-4c5b-9a03-568803205b97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eff5bd37565cfa8a7f7e7d67582f6f2c620d9b6eb3b9ef2fc5692f79b494b11e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79cb788654-7g89k" podUID="26129a7b-5c2e-4c5b-9a03-568803205b97" Jul 15 04:44:13.350547 systemd[1]: Created slice kubepods-besteffort-podf012837f_8aa6_4d96_a20b_3f91976c5b9b.slice - libcontainer container kubepods-besteffort-podf012837f_8aa6_4d96_a20b_3f91976c5b9b.slice. Jul 15 04:44:13.364759 containerd[1524]: time="2025-07-15T04:44:13.364708619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrgjj,Uid:f012837f-8aa6-4d96-a20b-3f91976c5b9b,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:13.423469 containerd[1524]: time="2025-07-15T04:44:13.423418393Z" level=error msg="Failed to destroy network for sandbox \"edf23c68775646c636146fe082d43eceed60d9d56bf0c39eee77c444f0115ab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.426490 containerd[1524]: time="2025-07-15T04:44:13.426438604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrgjj,Uid:f012837f-8aa6-4d96-a20b-3f91976c5b9b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf23c68775646c636146fe082d43eceed60d9d56bf0c39eee77c444f0115ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.427092 kubelet[2670]: E0715 04:44:13.426797 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf23c68775646c636146fe082d43eceed60d9d56bf0c39eee77c444f0115ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:44:13.427092 kubelet[2670]: E0715 04:44:13.426854 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf23c68775646c636146fe082d43eceed60d9d56bf0c39eee77c444f0115ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:13.427092 kubelet[2670]: E0715 04:44:13.426873 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf23c68775646c636146fe082d43eceed60d9d56bf0c39eee77c444f0115ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xrgjj" Jul 15 04:44:13.428077 kubelet[2670]: E0715 04:44:13.426911 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xrgjj_calico-system(f012837f-8aa6-4d96-a20b-3f91976c5b9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xrgjj_calico-system(f012837f-8aa6-4d96-a20b-3f91976c5b9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edf23c68775646c636146fe082d43eceed60d9d56bf0c39eee77c444f0115ab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xrgjj" podUID="f012837f-8aa6-4d96-a20b-3f91976c5b9b" Jul 15 04:44:13.727055 systemd[1]: run-netns-cni\x2d27803cce\x2d6b2f\x2db892\x2db304\x2d7b9383b6b0ed.mount: Deactivated successfully. 
Jul 15 04:44:13.727403 systemd[1]: run-netns-cni\x2da2f00043\x2d402c\x2dd1a1\x2dff29\x2d47cdbe598860.mount: Deactivated successfully. Jul 15 04:44:13.727458 systemd[1]: run-netns-cni\x2d0a88e115\x2d243e\x2d80f3\x2d859c\x2d78a79924dcd7.mount: Deactivated successfully. Jul 15 04:44:13.727502 systemd[1]: run-netns-cni\x2d862dbb3c\x2dbde3\x2d2add\x2d2b1f\x2def2f263d20be.mount: Deactivated successfully. Jul 15 04:44:14.907977 kubelet[2670]: I0715 04:44:14.907940 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:44:15.556114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187583167.mount: Deactivated successfully. Jul 15 04:44:15.719978 containerd[1524]: time="2025-07-15T04:44:15.719927174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:15.721392 containerd[1524]: time="2025-07-15T04:44:15.721227714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:44:15.722143 containerd[1524]: time="2025-07-15T04:44:15.722114837Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:15.724076 containerd[1524]: time="2025-07-15T04:44:15.723895924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:15.729329 containerd[1524]: time="2025-07-15T04:44:15.729283191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.282729938s" Jul 15 04:44:15.729329 containerd[1524]: time="2025-07-15T04:44:15.729318436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:44:15.738649 containerd[1524]: time="2025-07-15T04:44:15.738608603Z" level=info msg="CreateContainer within sandbox \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:44:15.749059 containerd[1524]: time="2025-07-15T04:44:15.748530899Z" level=info msg="Container 6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:15.763893 containerd[1524]: time="2025-07-15T04:44:15.763843701Z" level=info msg="CreateContainer within sandbox \"3342ca2e36f138a94025059b79e5d4342e86fe9a0fad57d6b9f44acb10cf0722\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723\"" Jul 15 04:44:15.764374 containerd[1524]: time="2025-07-15T04:44:15.764341370Z" level=info msg="StartContainer for \"6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723\"" Jul 15 04:44:15.765730 containerd[1524]: time="2025-07-15T04:44:15.765700839Z" level=info msg="connecting to shim 6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723" address="unix:///run/containerd/s/93add462ae458e3a234b4c06af325ef245a28b7e46b9dc014e226e881393dadb" protocol=ttrpc version=3 Jul 15 04:44:15.787176 systemd[1]: Started cri-containerd-6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723.scope - libcontainer container 6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723. 
Jul 15 04:44:15.831321 containerd[1524]: time="2025-07-15T04:44:15.831268087Z" level=info msg="StartContainer for \"6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723\" returns successfully" Jul 15 04:44:16.037486 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:44:16.037598 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 04:44:16.245111 kubelet[2670]: I0715 04:44:16.244975 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws4gn\" (UniqueName: \"kubernetes.io/projected/26129a7b-5c2e-4c5b-9a03-568803205b97-kube-api-access-ws4gn\") pod \"26129a7b-5c2e-4c5b-9a03-568803205b97\" (UID: \"26129a7b-5c2e-4c5b-9a03-568803205b97\") " Jul 15 04:44:16.245111 kubelet[2670]: I0715 04:44:16.245072 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-ca-bundle\") pod \"26129a7b-5c2e-4c5b-9a03-568803205b97\" (UID: \"26129a7b-5c2e-4c5b-9a03-568803205b97\") " Jul 15 04:44:16.245111 kubelet[2670]: I0715 04:44:16.245106 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-backend-key-pair\") pod \"26129a7b-5c2e-4c5b-9a03-568803205b97\" (UID: \"26129a7b-5c2e-4c5b-9a03-568803205b97\") " Jul 15 04:44:16.252318 kubelet[2670]: I0715 04:44:16.252243 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "26129a7b-5c2e-4c5b-9a03-568803205b97" (UID: "26129a7b-5c2e-4c5b-9a03-568803205b97"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 04:44:16.259063 kubelet[2670]: I0715 04:44:16.258762 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "26129a7b-5c2e-4c5b-9a03-568803205b97" (UID: "26129a7b-5c2e-4c5b-9a03-568803205b97"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 04:44:16.259205 kubelet[2670]: I0715 04:44:16.259171 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26129a7b-5c2e-4c5b-9a03-568803205b97-kube-api-access-ws4gn" (OuterVolumeSpecName: "kube-api-access-ws4gn") pod "26129a7b-5c2e-4c5b-9a03-568803205b97" (UID: "26129a7b-5c2e-4c5b-9a03-568803205b97"). InnerVolumeSpecName "kube-api-access-ws4gn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:44:16.345983 kubelet[2670]: I0715 04:44:16.345913 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 04:44:16.345983 kubelet[2670]: I0715 04:44:16.345950 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws4gn\" (UniqueName: \"kubernetes.io/projected/26129a7b-5c2e-4c5b-9a03-568803205b97-kube-api-access-ws4gn\") on node \"localhost\" DevicePath \"\"" Jul 15 04:44:16.345983 kubelet[2670]: I0715 04:44:16.345960 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26129a7b-5c2e-4c5b-9a03-568803205b97-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 04:44:16.479935 systemd[1]: Removed slice kubepods-besteffort-pod26129a7b_5c2e_4c5b_9a03_568803205b97.slice - libcontainer container 
kubepods-besteffort-pod26129a7b_5c2e_4c5b_9a03_568803205b97.slice. Jul 15 04:44:16.502777 kubelet[2670]: I0715 04:44:16.502463 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pmbh2" podStartSLOduration=1.3560592470000001 podStartE2EDuration="11.502233464s" podCreationTimestamp="2025-07-15 04:44:05 +0000 UTC" firstStartedPulling="2025-07-15 04:44:05.583885121 +0000 UTC m=+20.327776001" lastFinishedPulling="2025-07-15 04:44:15.730059378 +0000 UTC m=+30.473950218" observedRunningTime="2025-07-15 04:44:16.492760997 +0000 UTC m=+31.236651877" watchObservedRunningTime="2025-07-15 04:44:16.502233464 +0000 UTC m=+31.246124344" Jul 15 04:44:16.547269 systemd[1]: Created slice kubepods-besteffort-poda1b0d7aa_8615_47f4_bf04_a32dde06f4a1.slice - libcontainer container kubepods-besteffort-poda1b0d7aa_8615_47f4_bf04_a32dde06f4a1.slice. Jul 15 04:44:16.559076 systemd[1]: var-lib-kubelet-pods-26129a7b\x2d5c2e\x2d4c5b\x2d9a03\x2d568803205b97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dws4gn.mount: Deactivated successfully. Jul 15 04:44:16.559181 systemd[1]: var-lib-kubelet-pods-26129a7b\x2d5c2e\x2d4c5b\x2d9a03\x2d568803205b97-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 04:44:16.647882 kubelet[2670]: I0715 04:44:16.647813 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl822\" (UniqueName: \"kubernetes.io/projected/a1b0d7aa-8615-47f4-bf04-a32dde06f4a1-kube-api-access-xl822\") pod \"whisker-54d66b7f78-qxgcm\" (UID: \"a1b0d7aa-8615-47f4-bf04-a32dde06f4a1\") " pod="calico-system/whisker-54d66b7f78-qxgcm" Jul 15 04:44:16.647882 kubelet[2670]: I0715 04:44:16.647856 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1b0d7aa-8615-47f4-bf04-a32dde06f4a1-whisker-ca-bundle\") pod \"whisker-54d66b7f78-qxgcm\" (UID: \"a1b0d7aa-8615-47f4-bf04-a32dde06f4a1\") " pod="calico-system/whisker-54d66b7f78-qxgcm" Jul 15 04:44:16.648093 kubelet[2670]: I0715 04:44:16.647908 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1b0d7aa-8615-47f4-bf04-a32dde06f4a1-whisker-backend-key-pair\") pod \"whisker-54d66b7f78-qxgcm\" (UID: \"a1b0d7aa-8615-47f4-bf04-a32dde06f4a1\") " pod="calico-system/whisker-54d66b7f78-qxgcm" Jul 15 04:44:16.852785 containerd[1524]: time="2025-07-15T04:44:16.852745644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d66b7f78-qxgcm,Uid:a1b0d7aa-8615-47f4-bf04-a32dde06f4a1,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:17.098328 systemd-networkd[1435]: calibb0394c69ca: Link UP Jul 15 04:44:17.098877 systemd-networkd[1435]: calibb0394c69ca: Gained carrier Jul 15 04:44:17.111958 containerd[1524]: 2025-07-15 04:44:16.873 [INFO][3749] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:44:17.111958 containerd[1524]: 2025-07-15 04:44:16.941 [INFO][3749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--54d66b7f78--qxgcm-eth0 whisker-54d66b7f78- calico-system a1b0d7aa-8615-47f4-bf04-a32dde06f4a1 889 0 2025-07-15 04:44:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54d66b7f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54d66b7f78-qxgcm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibb0394c69ca [] [] }} ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-" Jul 15 04:44:17.111958 containerd[1524]: 2025-07-15 04:44:16.941 [INFO][3749] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.111958 containerd[1524]: 2025-07-15 04:44:17.044 [INFO][3763] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" HandleID="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Workload="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.044 [INFO][3763] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" HandleID="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Workload="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001aa170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54d66b7f78-qxgcm", "timestamp":"2025-07-15 04:44:17.044067233 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.044 [INFO][3763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.044 [INFO][3763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.044 [INFO][3763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.060 [INFO][3763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" host="localhost" Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.069 [INFO][3763] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.073 [INFO][3763] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.074 [INFO][3763] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.076 [INFO][3763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:17.112283 containerd[1524]: 2025-07-15 04:44:17.076 [INFO][3763] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" host="localhost" Jul 15 04:44:17.112552 containerd[1524]: 2025-07-15 04:44:17.077 [INFO][3763] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e Jul 15 04:44:17.112552 containerd[1524]: 2025-07-15 04:44:17.080 [INFO][3763] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" host="localhost" Jul 15 04:44:17.112552 containerd[1524]: 2025-07-15 04:44:17.085 [INFO][3763] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" host="localhost" Jul 15 04:44:17.112552 containerd[1524]: 2025-07-15 04:44:17.086 [INFO][3763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" host="localhost" Jul 15 04:44:17.112552 containerd[1524]: 2025-07-15 04:44:17.086 [INFO][3763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:44:17.112552 containerd[1524]: 2025-07-15 04:44:17.086 [INFO][3763] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" HandleID="k8s-pod-network.9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Workload="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.112706 containerd[1524]: 2025-07-15 04:44:17.088 [INFO][3749] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54d66b7f78--qxgcm-eth0", GenerateName:"whisker-54d66b7f78-", Namespace:"calico-system", SelfLink:"", UID:"a1b0d7aa-8615-47f4-bf04-a32dde06f4a1", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d66b7f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54d66b7f78-qxgcm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb0394c69ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:17.112706 containerd[1524]: 2025-07-15 04:44:17.088 [INFO][3749] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.112793 containerd[1524]: 2025-07-15 04:44:17.088 [INFO][3749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb0394c69ca ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.112793 containerd[1524]: 2025-07-15 04:44:17.098 [INFO][3749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.112842 containerd[1524]: 2025-07-15 04:44:17.099 [INFO][3749] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54d66b7f78--qxgcm-eth0", GenerateName:"whisker-54d66b7f78-", Namespace:"calico-system", SelfLink:"", UID:"a1b0d7aa-8615-47f4-bf04-a32dde06f4a1", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 16, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d66b7f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e", Pod:"whisker-54d66b7f78-qxgcm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb0394c69ca", MAC:"9e:21:77:cc:0d:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:17.112904 containerd[1524]: 2025-07-15 04:44:17.109 [INFO][3749] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" Namespace="calico-system" Pod="whisker-54d66b7f78-qxgcm" WorkloadEndpoint="localhost-k8s-whisker--54d66b7f78--qxgcm-eth0" Jul 15 04:44:17.140482 containerd[1524]: time="2025-07-15T04:44:17.140394147Z" level=info msg="connecting to shim 9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e" address="unix:///run/containerd/s/87d46ecceb4354c82e4cfcb2668794db269981e55387413a1b0f13963e29dce0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:17.165236 systemd[1]: Started cri-containerd-9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e.scope - libcontainer container 9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e. 
Jul 15 04:44:17.175931 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:17.205155 containerd[1524]: time="2025-07-15T04:44:17.205111021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d66b7f78-qxgcm,Uid:a1b0d7aa-8615-47f4-bf04-a32dde06f4a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e\"" Jul 15 04:44:17.206622 containerd[1524]: time="2025-07-15T04:44:17.206592212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:44:17.345942 kubelet[2670]: I0715 04:44:17.345897 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26129a7b-5c2e-4c5b-9a03-568803205b97" path="/var/lib/kubelet/pods/26129a7b-5c2e-4c5b-9a03-568803205b97/volumes" Jul 15 04:44:17.627099 containerd[1524]: time="2025-07-15T04:44:17.627052726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723\" id:\"ed6c17bb7180e3278280b7d4f54a5d9761e84710461c687656dbb3be2337b825\" pid:3961 exit_status:1 exited_at:{seconds:1752554657 nanos:626767849}" Jul 15 04:44:17.764894 systemd-networkd[1435]: vxlan.calico: Link UP Jul 15 04:44:17.764901 systemd-networkd[1435]: vxlan.calico: Gained carrier Jul 15 04:44:18.100582 containerd[1524]: time="2025-07-15T04:44:18.100545855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:18.101074 containerd[1524]: time="2025-07-15T04:44:18.100970388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:44:18.104400 containerd[1524]: time="2025-07-15T04:44:18.104349409Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:18.107620 containerd[1524]: time="2025-07-15T04:44:18.107562050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:18.108516 containerd[1524]: time="2025-07-15T04:44:18.108077634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 901.450538ms" Jul 15 04:44:18.108516 containerd[1524]: time="2025-07-15T04:44:18.108114439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:44:18.111146 containerd[1524]: time="2025-07-15T04:44:18.111113533Z" level=info msg="CreateContainer within sandbox \"9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:44:18.116221 containerd[1524]: time="2025-07-15T04:44:18.116190247Z" level=info msg="Container ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:18.123917 containerd[1524]: time="2025-07-15T04:44:18.123862004Z" level=info msg="CreateContainer within sandbox \"9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b\"" Jul 15 04:44:18.124389 containerd[1524]: time="2025-07-15T04:44:18.124348584Z" level=info msg="StartContainer for 
\"ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b\"" Jul 15 04:44:18.125812 containerd[1524]: time="2025-07-15T04:44:18.125777363Z" level=info msg="connecting to shim ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b" address="unix:///run/containerd/s/87d46ecceb4354c82e4cfcb2668794db269981e55387413a1b0f13963e29dce0" protocol=ttrpc version=3 Jul 15 04:44:18.146260 systemd[1]: Started cri-containerd-ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b.scope - libcontainer container ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b. Jul 15 04:44:18.183865 containerd[1524]: time="2025-07-15T04:44:18.183369828Z" level=info msg="StartContainer for \"ba3c2e9235806f5c5c2cae1edcfd98961e878e7889dd40aefae816ab7526fe4b\" returns successfully" Jul 15 04:44:18.186558 containerd[1524]: time="2025-07-15T04:44:18.186534063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:44:18.553128 containerd[1524]: time="2025-07-15T04:44:18.552162838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723\" id:\"9dcfe1628b956ed74c87d8831a828992486c698202438c9138b64f3eebc73deb\" pid:4109 exit_status:1 exited_at:{seconds:1752554658 nanos:551830277}" Jul 15 04:44:18.555413 systemd-networkd[1435]: calibb0394c69ca: Gained IPv6LL Jul 15 04:44:19.004065 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Jul 15 04:44:19.310522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210201977.mount: Deactivated successfully. 
Jul 15 04:44:19.335597 containerd[1524]: time="2025-07-15T04:44:19.335551378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:19.336993 containerd[1524]: time="2025-07-15T04:44:19.336941906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:44:19.337591 containerd[1524]: time="2025-07-15T04:44:19.337565021Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:19.339969 containerd[1524]: time="2025-07-15T04:44:19.339922506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:19.340581 containerd[1524]: time="2025-07-15T04:44:19.340550502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.153986396s" Jul 15 04:44:19.340581 containerd[1524]: time="2025-07-15T04:44:19.340579305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:44:19.343132 containerd[1524]: time="2025-07-15T04:44:19.343102850Z" level=info msg="CreateContainer within sandbox \"9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:44:19.350252 
containerd[1524]: time="2025-07-15T04:44:19.350213468Z" level=info msg="Container dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:19.357551 containerd[1524]: time="2025-07-15T04:44:19.357507989Z" level=info msg="CreateContainer within sandbox \"9b2cf1afe9a141ed6fd1f5e30fe0aedd94d5f66cd50f3781767b615d355a252e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6\"" Jul 15 04:44:19.357958 containerd[1524]: time="2025-07-15T04:44:19.357917238Z" level=info msg="StartContainer for \"dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6\"" Jul 15 04:44:19.360700 containerd[1524]: time="2025-07-15T04:44:19.360655009Z" level=info msg="connecting to shim dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6" address="unix:///run/containerd/s/87d46ecceb4354c82e4cfcb2668794db269981e55387413a1b0f13963e29dce0" protocol=ttrpc version=3 Jul 15 04:44:19.385234 systemd[1]: Started cri-containerd-dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6.scope - libcontainer container dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6. 
Jul 15 04:44:19.427512 containerd[1524]: time="2025-07-15T04:44:19.427470274Z" level=info msg="StartContainer for \"dc34f5b4bbab7f65cfa299d515b13c13fedb9d657d59e1e5d1b0b5b92f4a4ab6\" returns successfully" Jul 15 04:44:19.502330 kubelet[2670]: I0715 04:44:19.502268 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54d66b7f78-qxgcm" podStartSLOduration=1.367420743 podStartE2EDuration="3.50225222s" podCreationTimestamp="2025-07-15 04:44:16 +0000 UTC" firstStartedPulling="2025-07-15 04:44:17.206391626 +0000 UTC m=+31.950282506" lastFinishedPulling="2025-07-15 04:44:19.341223143 +0000 UTC m=+34.085113983" observedRunningTime="2025-07-15 04:44:19.50149997 +0000 UTC m=+34.245390850" watchObservedRunningTime="2025-07-15 04:44:19.50225222 +0000 UTC m=+34.246143100" Jul 15 04:44:24.347784 containerd[1524]: time="2025-07-15T04:44:24.347730753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff68d64f6-j49l9,Uid:91fbad59-8877-4e28-be2e-9de422626132,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:24.503149 systemd-networkd[1435]: cali0c573476a8b: Link UP Jul 15 04:44:24.503629 systemd-networkd[1435]: cali0c573476a8b: Gained carrier Jul 15 04:44:24.518626 containerd[1524]: 2025-07-15 04:44:24.436 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0 calico-kube-controllers-6ff68d64f6- calico-system 91fbad59-8877-4e28-be2e-9de422626132 823 0 2025-07-15 04:44:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ff68d64f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6ff68d64f6-j49l9 eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali0c573476a8b [] [] }} ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-" Jul 15 04:44:24.518626 containerd[1524]: 2025-07-15 04:44:24.437 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.518626 containerd[1524]: 2025-07-15 04:44:24.462 [INFO][4194] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" HandleID="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Workload="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.462 [INFO][4194] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" HandleID="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Workload="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6ff68d64f6-j49l9", "timestamp":"2025-07-15 04:44:24.462524765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.462 [INFO][4194] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.462 [INFO][4194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.462 [INFO][4194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.475 [INFO][4194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" host="localhost" Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.479 [INFO][4194] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.483 [INFO][4194] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.484 [INFO][4194] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.486 [INFO][4194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:24.518814 containerd[1524]: 2025-07-15 04:44:24.486 [INFO][4194] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" host="localhost" Jul 15 04:44:24.519254 containerd[1524]: 2025-07-15 04:44:24.488 [INFO][4194] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7 Jul 15 04:44:24.519254 containerd[1524]: 2025-07-15 04:44:24.491 [INFO][4194] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" host="localhost" Jul 15 04:44:24.519254 
containerd[1524]: 2025-07-15 04:44:24.497 [INFO][4194] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" host="localhost" Jul 15 04:44:24.519254 containerd[1524]: 2025-07-15 04:44:24.497 [INFO][4194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" host="localhost" Jul 15 04:44:24.519254 containerd[1524]: 2025-07-15 04:44:24.497 [INFO][4194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:44:24.519254 containerd[1524]: 2025-07-15 04:44:24.497 [INFO][4194] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" HandleID="k8s-pod-network.a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Workload="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.519435 containerd[1524]: 2025-07-15 04:44:24.499 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0", GenerateName:"calico-kube-controllers-6ff68d64f6-", Namespace:"calico-system", SelfLink:"", UID:"91fbad59-8877-4e28-be2e-9de422626132", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff68d64f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6ff68d64f6-j49l9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c573476a8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:24.519497 containerd[1524]: 2025-07-15 04:44:24.499 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.519497 containerd[1524]: 2025-07-15 04:44:24.499 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c573476a8b ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.519497 containerd[1524]: 2025-07-15 04:44:24.502 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" 
Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.519566 containerd[1524]: 2025-07-15 04:44:24.505 [INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0", GenerateName:"calico-kube-controllers-6ff68d64f6-", Namespace:"calico-system", SelfLink:"", UID:"91fbad59-8877-4e28-be2e-9de422626132", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff68d64f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7", Pod:"calico-kube-controllers-6ff68d64f6-j49l9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali0c573476a8b", MAC:"9a:47:0e:65:c9:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:24.519613 containerd[1524]: 2025-07-15 04:44:24.514 [INFO][4181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" Namespace="calico-system" Pod="calico-kube-controllers-6ff68d64f6-j49l9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff68d64f6--j49l9-eth0" Jul 15 04:44:24.538592 containerd[1524]: time="2025-07-15T04:44:24.538555468Z" level=info msg="connecting to shim a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7" address="unix:///run/containerd/s/145afc219e712fbb70dc9ada864e547ecabdcc8bf1794b0da88c1c2cf5b3736f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:24.556188 systemd[1]: Started cri-containerd-a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7.scope - libcontainer container a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7. 
Jul 15 04:44:24.565892 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:24.584253 containerd[1524]: time="2025-07-15T04:44:24.584220175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff68d64f6-j49l9,Uid:91fbad59-8877-4e28-be2e-9de422626132,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7\"" Jul 15 04:44:24.588798 containerd[1524]: time="2025-07-15T04:44:24.588771448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 04:44:25.346342 containerd[1524]: time="2025-07-15T04:44:25.346022936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gtnvx,Uid:fce61345-00e5-4ff3-a878-00cffc66772c,Namespace:kube-system,Attempt:0,}" Jul 15 04:44:25.477271 systemd-networkd[1435]: cali6c2200989d9: Link UP Jul 15 04:44:25.477751 systemd-networkd[1435]: cali6c2200989d9: Gained carrier Jul 15 04:44:25.500449 containerd[1524]: 2025-07-15 04:44:25.396 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0 coredns-668d6bf9bc- kube-system fce61345-00e5-4ff3-a878-00cffc66772c 815 0 2025-07-15 04:43:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-gtnvx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c2200989d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-" Jul 15 04:44:25.500449 containerd[1524]: 2025-07-15 04:44:25.396 
[INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.500449 containerd[1524]: 2025-07-15 04:44:25.430 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" HandleID="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Workload="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.431 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" HandleID="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Workload="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d670), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-gtnvx", "timestamp":"2025-07-15 04:44:25.430869442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.431 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.431 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.431 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.442 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" host="localhost" Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.447 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.452 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.455 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.457 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:25.500835 containerd[1524]: 2025-07-15 04:44:25.458 [INFO][4274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" host="localhost" Jul 15 04:44:25.501059 containerd[1524]: 2025-07-15 04:44:25.459 [INFO][4274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3 Jul 15 04:44:25.501059 containerd[1524]: 2025-07-15 04:44:25.464 [INFO][4274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" host="localhost" Jul 15 04:44:25.501059 containerd[1524]: 2025-07-15 04:44:25.470 [INFO][4274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" host="localhost" Jul 15 04:44:25.501059 containerd[1524]: 2025-07-15 04:44:25.470 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" host="localhost" Jul 15 04:44:25.501059 containerd[1524]: 2025-07-15 04:44:25.470 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:44:25.501059 containerd[1524]: 2025-07-15 04:44:25.470 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" HandleID="k8s-pod-network.f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Workload="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.501177 containerd[1524]: 2025-07-15 04:44:25.472 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fce61345-00e5-4ff3-a878-00cffc66772c", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-gtnvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c2200989d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:25.501240 containerd[1524]: 2025-07-15 04:44:25.472 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.501240 containerd[1524]: 2025-07-15 04:44:25.472 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c2200989d9 ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.501240 containerd[1524]: 2025-07-15 04:44:25.477 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.501299 containerd[1524]: 2025-07-15 04:44:25.478 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fce61345-00e5-4ff3-a878-00cffc66772c", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3", Pod:"coredns-668d6bf9bc-gtnvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c2200989d9", MAC:"42:9b:5b:77:6f:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:25.501299 containerd[1524]: 2025-07-15 04:44:25.497 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-gtnvx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gtnvx-eth0" Jul 15 04:44:25.568836 containerd[1524]: time="2025-07-15T04:44:25.568377638Z" level=info msg="connecting to shim f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3" address="unix:///run/containerd/s/3dbc6265b1d547ad68e523121ee0aea696b35a805ed92b6691b88bc2bdb776cb" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:25.602248 systemd[1]: Started cri-containerd-f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3.scope - libcontainer container f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3. 
Jul 15 04:44:25.615074 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:25.636357 containerd[1524]: time="2025-07-15T04:44:25.636322833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gtnvx,Uid:fce61345-00e5-4ff3-a878-00cffc66772c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3\"" Jul 15 04:44:25.639101 containerd[1524]: time="2025-07-15T04:44:25.638894534Z" level=info msg="CreateContainer within sandbox \"f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:44:25.650796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363552404.mount: Deactivated successfully. Jul 15 04:44:25.653299 containerd[1524]: time="2025-07-15T04:44:25.653263148Z" level=info msg="Container 4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:25.660090 containerd[1524]: time="2025-07-15T04:44:25.660052555Z" level=info msg="CreateContainer within sandbox \"f4d87a9511337c399eff3ccefba2593a021d16b0683192d2e3b8c9dda0c0d0d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234\"" Jul 15 04:44:25.660523 containerd[1524]: time="2025-07-15T04:44:25.660500680Z" level=info msg="StartContainer for \"4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234\"" Jul 15 04:44:25.661994 containerd[1524]: time="2025-07-15T04:44:25.661864498Z" level=info msg="connecting to shim 4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234" address="unix:///run/containerd/s/3dbc6265b1d547ad68e523121ee0aea696b35a805ed92b6691b88bc2bdb776cb" protocol=ttrpc version=3 Jul 15 04:44:25.688236 systemd[1]: Started 
cri-containerd-4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234.scope - libcontainer container 4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234. Jul 15 04:44:25.728973 containerd[1524]: time="2025-07-15T04:44:25.728928084Z" level=info msg="StartContainer for \"4020303363a8d6f76896bb7e405800a0ef9230aa70d8c9097d6357ed891a8234\" returns successfully" Jul 15 04:44:26.215413 containerd[1524]: time="2025-07-15T04:44:26.215367317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 04:44:26.219197 containerd[1524]: time="2025-07-15T04:44:26.219140849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.630337438s" Jul 15 04:44:26.219434 containerd[1524]: time="2025-07-15T04:44:26.219325427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 04:44:26.219777 containerd[1524]: time="2025-07-15T04:44:26.219698264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:26.220923 containerd[1524]: time="2025-07-15T04:44:26.220673840Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:26.221388 containerd[1524]: time="2025-07-15T04:44:26.221363668Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:26.229097 containerd[1524]: time="2025-07-15T04:44:26.229069228Z" level=info msg="CreateContainer within sandbox \"a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 04:44:26.234007 containerd[1524]: time="2025-07-15T04:44:26.233979672Z" level=info msg="Container a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:26.240495 containerd[1524]: time="2025-07-15T04:44:26.240395785Z" level=info msg="CreateContainer within sandbox \"a8a2e339dec4229a91500e01a52db17692541a5b0150ac951f303372f2a40ad7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\"" Jul 15 04:44:26.241376 containerd[1524]: time="2025-07-15T04:44:26.241351999Z" level=info msg="StartContainer for \"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\"" Jul 15 04:44:26.242382 containerd[1524]: time="2025-07-15T04:44:26.242349178Z" level=info msg="connecting to shim a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c" address="unix:///run/containerd/s/145afc219e712fbb70dc9ada864e547ecabdcc8bf1794b0da88c1c2cf5b3736f" protocol=ttrpc version=3 Jul 15 04:44:26.269233 systemd[1]: Started cri-containerd-a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c.scope - libcontainer container a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c. 
Jul 15 04:44:26.343059 containerd[1524]: time="2025-07-15T04:44:26.342974781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-db5m2,Uid:82a27411-bde6-41a1-806d-7ff1ba284ea9,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:26.343611 containerd[1524]: time="2025-07-15T04:44:26.343564399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-4lhx7,Uid:0d43d695-d62b-4cf8-9d5d-f7d9df53995a,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:44:26.344923 containerd[1524]: time="2025-07-15T04:44:26.344897370Z" level=info msg="StartContainer for \"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\" returns successfully" Jul 15 04:44:26.491658 systemd-networkd[1435]: cali0c573476a8b: Gained IPv6LL Jul 15 04:44:26.529489 kubelet[2670]: I0715 04:44:26.528774 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gtnvx" podStartSLOduration=35.528754942 podStartE2EDuration="35.528754942s" podCreationTimestamp="2025-07-15 04:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:44:26.525770967 +0000 UTC m=+41.269661847" watchObservedRunningTime="2025-07-15 04:44:26.528754942 +0000 UTC m=+41.272645782" Jul 15 04:44:26.557505 systemd-networkd[1435]: cali389311ae77b: Link UP Jul 15 04:44:26.565661 systemd-networkd[1435]: cali389311ae77b: Gained carrier Jul 15 04:44:26.568813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239004054.mount: Deactivated successfully. 
Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.438 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0 calico-apiserver-5c4b7c8858- calico-apiserver 0d43d695-d62b-4cf8-9d5d-f7d9df53995a 824 0 2025-07-15 04:43:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c4b7c8858 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c4b7c8858-4lhx7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali389311ae77b [] [] }} ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.438 [INFO][4427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.469 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" HandleID="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Workload="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.469 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" 
HandleID="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Workload="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c4b7c8858-4lhx7", "timestamp":"2025-07-15 04:44:26.46931956 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.469 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.469 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.469 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.482 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.487 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.502 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.505 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.507 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 
2025-07-15 04:44:26.507 [INFO][4446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.509 [INFO][4446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1 Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.515 [INFO][4446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.523 [INFO][4446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.525 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" host="localhost" Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.525 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:44:26.601155 containerd[1524]: 2025-07-15 04:44:26.525 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" HandleID="k8s-pod-network.e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Workload="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.602753 containerd[1524]: 2025-07-15 04:44:26.546 [INFO][4427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0", GenerateName:"calico-apiserver-5c4b7c8858-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d43d695-d62b-4cf8-9d5d-f7d9df53995a", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c4b7c8858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c4b7c8858-4lhx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali389311ae77b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:26.602753 containerd[1524]: 2025-07-15 04:44:26.547 [INFO][4427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.602753 containerd[1524]: 2025-07-15 04:44:26.547 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali389311ae77b ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.602753 containerd[1524]: 2025-07-15 04:44:26.565 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.602753 containerd[1524]: 2025-07-15 04:44:26.567 [INFO][4427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0", GenerateName:"calico-apiserver-5c4b7c8858-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"0d43d695-d62b-4cf8-9d5d-f7d9df53995a", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c4b7c8858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1", Pod:"calico-apiserver-5c4b7c8858-4lhx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali389311ae77b", MAC:"8a:ec:80:a4:be:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:26.602753 containerd[1524]: 2025-07-15 04:44:26.595 [INFO][4427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-4lhx7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--4lhx7-eth0" Jul 15 04:44:26.619864 systemd-networkd[1435]: cali6c2200989d9: Gained IPv6LL Jul 15 04:44:26.643555 containerd[1524]: time="2025-07-15T04:44:26.643469614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\" 
id:\"e9825ae52c23d7ec465b89277176104df1e7425330fd47d1b6796ad6d5cfcf55\" pid:4478 exit_status:1 exited_at:{seconds:1752554666 nanos:641182389}" Jul 15 04:44:26.646172 containerd[1524]: time="2025-07-15T04:44:26.645856490Z" level=info msg="connecting to shim e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1" address="unix:///run/containerd/s/4c8098a9f39d87bda61e119b31ef52a98434d153dc98c553ea72b9875551c043" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:26.670957 systemd-networkd[1435]: cali137ef6047f7: Link UP Jul 15 04:44:26.673205 systemd-networkd[1435]: cali137ef6047f7: Gained carrier Jul 15 04:44:26.682181 systemd[1]: Started cri-containerd-e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1.scope - libcontainer container e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1. Jul 15 04:44:26.688334 kubelet[2670]: I0715 04:44:26.688244 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6ff68d64f6-j49l9" podStartSLOduration=20.05587715 podStartE2EDuration="21.688224468s" podCreationTimestamp="2025-07-15 04:44:05 +0000 UTC" firstStartedPulling="2025-07-15 04:44:24.58840993 +0000 UTC m=+39.332300810" lastFinishedPulling="2025-07-15 04:44:26.220757248 +0000 UTC m=+40.964648128" observedRunningTime="2025-07-15 04:44:26.606121731 +0000 UTC m=+41.350012611" watchObservedRunningTime="2025-07-15 04:44:26.688224468 +0000 UTC m=+41.432115348" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.444 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--db5m2-eth0 goldmane-768f4c5c69- calico-system 82a27411-bde6-41a1-806d-7ff1ba284ea9 821 0 2025-07-15 04:44:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-db5m2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali137ef6047f7 [] [] }} ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.444 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.483 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" HandleID="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Workload="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.484 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" HandleID="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Workload="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-db5m2", "timestamp":"2025-07-15 04:44:26.483755464 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.484 
[INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.530 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.530 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.586 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.604 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.614 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.618 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.624 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.626 [INFO][4452] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.629 [INFO][4452] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4 Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.638 [INFO][4452] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" host="localhost" Jul 15 
04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.649 [INFO][4452] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.650 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" host="localhost" Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.651 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:44:26.691704 containerd[1524]: 2025-07-15 04:44:26.651 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" HandleID="k8s-pod-network.8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Workload="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.692185 containerd[1524]: 2025-07-15 04:44:26.660 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--db5m2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"82a27411-bde6-41a1-806d-7ff1ba284ea9", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-db5m2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali137ef6047f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:26.692185 containerd[1524]: 2025-07-15 04:44:26.660 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.692185 containerd[1524]: 2025-07-15 04:44:26.662 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali137ef6047f7 ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.692185 containerd[1524]: 2025-07-15 04:44:26.675 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.692185 containerd[1524]: 2025-07-15 04:44:26.675 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--db5m2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"82a27411-bde6-41a1-806d-7ff1ba284ea9", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4", Pod:"goldmane-768f4c5c69-db5m2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali137ef6047f7", MAC:"02:4f:5a:02:86:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:26.692185 containerd[1524]: 2025-07-15 04:44:26.688 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" Namespace="calico-system" Pod="goldmane-768f4c5c69-db5m2" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--db5m2-eth0" Jul 15 04:44:26.714715 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:26.715128 containerd[1524]: time="2025-07-15T04:44:26.715091037Z" level=info msg="connecting to shim 8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4" address="unix:///run/containerd/s/4a7517173c9aecabaff588c82b420772a4feb23aa83c6fecd6e0deee0c263c1d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:26.740375 systemd[1]: Started cri-containerd-8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4.scope - libcontainer container 8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4. Jul 15 04:44:26.742719 containerd[1524]: time="2025-07-15T04:44:26.742597670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-4lhx7,Uid:0d43d695-d62b-4cf8-9d5d-f7d9df53995a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1\"" Jul 15 04:44:26.745701 containerd[1524]: time="2025-07-15T04:44:26.745575884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:44:26.755559 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:26.780996 containerd[1524]: time="2025-07-15T04:44:26.780959293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-db5m2,Uid:82a27411-bde6-41a1-806d-7ff1ba284ea9,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4\"" Jul 15 04:44:26.791751 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:52564.service - OpenSSH per-connection server daemon (10.0.0.1:52564). 
Jul 15 04:44:26.867813 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 52564 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:26.869307 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:26.873844 systemd-logind[1506]: New session 8 of user core. Jul 15 04:44:26.883186 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 04:44:27.065213 sshd[4618]: Connection closed by 10.0.0.1 port 52564 Jul 15 04:44:27.065675 sshd-session[4614]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:27.069275 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:52564.service: Deactivated successfully. Jul 15 04:44:27.071196 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 04:44:27.071895 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. Jul 15 04:44:27.072851 systemd-logind[1506]: Removed session 8. Jul 15 04:44:27.343351 containerd[1524]: time="2025-07-15T04:44:27.343009693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsx84,Uid:c731f066-95c8-4765-956b-f87b24f29f54,Namespace:kube-system,Attempt:0,}" Jul 15 04:44:27.444064 systemd-networkd[1435]: cali631b75de03c: Link UP Jul 15 04:44:27.444628 systemd-networkd[1435]: cali631b75de03c: Gained carrier Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.380 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jsx84-eth0 coredns-668d6bf9bc- kube-system c731f066-95c8-4765-956b-f87b24f29f54 825 0 2025-07-15 04:43:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jsx84 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali631b75de03c [{dns UDP 53 0 } {dns-tcp TCP 
53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.380 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.403 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" HandleID="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Workload="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.404 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" HandleID="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Workload="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001969a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jsx84", "timestamp":"2025-07-15 04:44:27.403861787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.404 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.404 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.404 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.413 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.418 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.422 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.424 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.427 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.427 [INFO][4647] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.428 [INFO][4647] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.432 [INFO][4647] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.438 [INFO][4647] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.438 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" host="localhost" Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.438 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:44:27.460496 containerd[1524]: 2025-07-15 04:44:27.438 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" HandleID="k8s-pod-network.1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Workload="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.462346 containerd[1524]: 2025-07-15 04:44:27.441 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jsx84-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c731f066-95c8-4765-956b-f87b24f29f54", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jsx84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali631b75de03c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:27.462346 containerd[1524]: 2025-07-15 04:44:27.441 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.462346 containerd[1524]: 2025-07-15 04:44:27.441 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali631b75de03c ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.462346 containerd[1524]: 2025-07-15 04:44:27.445 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.462346 containerd[1524]: 2025-07-15 04:44:27.445 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jsx84-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c731f066-95c8-4765-956b-f87b24f29f54", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea", Pod:"coredns-668d6bf9bc-jsx84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali631b75de03c", MAC:"9a:3b:b4:9a:a0:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:27.462346 containerd[1524]: 2025-07-15 04:44:27.457 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsx84" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsx84-eth0" Jul 15 04:44:27.505177 containerd[1524]: time="2025-07-15T04:44:27.505082404Z" level=info msg="connecting to shim 1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea" address="unix:///run/containerd/s/8fce46853f206c14c080302410a7d94288d3da8b182e7dc5fe385f914628f948" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:27.526206 systemd[1]: Started cri-containerd-1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea.scope - libcontainer container 1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea. 
Jul 15 04:44:27.546616 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:27.603972 containerd[1524]: time="2025-07-15T04:44:27.603864867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\" id:\"ac3cadc218abd01d898dc4c1d189f4fcb088fcc8f9b7056533c90c3902df890b\" pid:4726 exited_at:{seconds:1752554667 nanos:603413583}" Jul 15 04:44:27.652153 containerd[1524]: time="2025-07-15T04:44:27.652072464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsx84,Uid:c731f066-95c8-4765-956b-f87b24f29f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea\"" Jul 15 04:44:27.655482 containerd[1524]: time="2025-07-15T04:44:27.655442148Z" level=info msg="CreateContainer within sandbox \"1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:44:27.664911 containerd[1524]: time="2025-07-15T04:44:27.664864895Z" level=info msg="Container 108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:27.668331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782720125.mount: Deactivated successfully. 
Jul 15 04:44:27.674321 containerd[1524]: time="2025-07-15T04:44:27.674285681Z" level=info msg="CreateContainer within sandbox \"1a7fdaf425ea9010d4a0c4bcd632320b1b194245bdf55c0ec30f6e413b305cea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59\"" Jul 15 04:44:27.675175 containerd[1524]: time="2025-07-15T04:44:27.675146484Z" level=info msg="StartContainer for \"108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59\"" Jul 15 04:44:27.676276 containerd[1524]: time="2025-07-15T04:44:27.676205826Z" level=info msg="connecting to shim 108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59" address="unix:///run/containerd/s/8fce46853f206c14c080302410a7d94288d3da8b182e7dc5fe385f914628f948" protocol=ttrpc version=3 Jul 15 04:44:27.708181 systemd[1]: Started cri-containerd-108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59.scope - libcontainer container 108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59. 
Jul 15 04:44:27.737657 containerd[1524]: time="2025-07-15T04:44:27.737616493Z" level=info msg="StartContainer for \"108510c31481fba73cd7fee1b912f657a051b7b9343735e61420cb5ffe135e59\" returns successfully" Jul 15 04:44:28.208452 containerd[1524]: time="2025-07-15T04:44:28.208408833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:28.208988 containerd[1524]: time="2025-07-15T04:44:28.208958444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 04:44:28.209856 containerd[1524]: time="2025-07-15T04:44:28.209818685Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:28.211870 containerd[1524]: time="2025-07-15T04:44:28.211821993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:28.212361 containerd[1524]: time="2025-07-15T04:44:28.212326121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.466528775s" Jul 15 04:44:28.212361 containerd[1524]: time="2025-07-15T04:44:28.212358644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:44:28.214435 containerd[1524]: time="2025-07-15T04:44:28.214250061Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 04:44:28.215224 containerd[1524]: time="2025-07-15T04:44:28.215202031Z" level=info msg="CreateContainer within sandbox \"e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:44:28.223896 containerd[1524]: time="2025-07-15T04:44:28.223165499Z" level=info msg="Container 9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:28.232231 containerd[1524]: time="2025-07-15T04:44:28.232192027Z" level=info msg="CreateContainer within sandbox \"e2ddbdd3d09aa3a3da3212aa7cdb3910a1f39b3a811c2e2a2dd66c4a3f7835c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17\"" Jul 15 04:44:28.232910 containerd[1524]: time="2025-07-15T04:44:28.232879971Z" level=info msg="StartContainer for \"9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17\"" Jul 15 04:44:28.234225 containerd[1524]: time="2025-07-15T04:44:28.234181894Z" level=info msg="connecting to shim 9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17" address="unix:///run/containerd/s/4c8098a9f39d87bda61e119b31ef52a98434d153dc98c553ea72b9875551c043" protocol=ttrpc version=3 Jul 15 04:44:28.262220 systemd[1]: Started cri-containerd-9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17.scope - libcontainer container 9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17. 
Jul 15 04:44:28.303598 containerd[1524]: time="2025-07-15T04:44:28.303502405Z" level=info msg="StartContainer for \"9c07561f81eeab12c91aed8701d3c7ca096240953e8167392cb81e29030b1a17\" returns successfully" Jul 15 04:44:28.344218 containerd[1524]: time="2025-07-15T04:44:28.343589050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-wmjsf,Uid:aea0df7c-5fe3-48eb-abd7-ab7bd635c065,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:44:28.346836 containerd[1524]: time="2025-07-15T04:44:28.346712304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrgjj,Uid:f012837f-8aa6-4d96-a20b-3f91976c5b9b,Namespace:calico-system,Attempt:0,}" Jul 15 04:44:28.350135 systemd-networkd[1435]: cali389311ae77b: Gained IPv6LL Jul 15 04:44:28.353210 systemd-networkd[1435]: cali137ef6047f7: Gained IPv6LL Jul 15 04:44:28.507673 systemd-networkd[1435]: cali2f1f0d1fae1: Link UP Jul 15 04:44:28.508558 systemd-networkd[1435]: cali2f1f0d1fae1: Gained carrier Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.409 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0 calico-apiserver-5c4b7c8858- calico-apiserver aea0df7c-5fe3-48eb-abd7-ab7bd635c065 822 0 2025-07-15 04:43:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c4b7c8858 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c4b7c8858-wmjsf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f1f0d1fae1 [] [] }} ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.409 [INFO][4818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.460 [INFO][4846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" HandleID="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Workload="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.461 [INFO][4846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" HandleID="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Workload="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c4b7c8858-wmjsf", "timestamp":"2025-07-15 04:44:28.460890949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.461 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.461 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.461 [INFO][4846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.473 [INFO][4846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.477 [INFO][4846] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.483 [INFO][4846] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.486 [INFO][4846] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.489 [INFO][4846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.489 [INFO][4846] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.491 [INFO][4846] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882 Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.495 [INFO][4846] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.501 [INFO][4846] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.501 [INFO][4846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" host="localhost" Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.501 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:44:28.520787 containerd[1524]: 2025-07-15 04:44:28.501 [INFO][4846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" HandleID="k8s-pod-network.74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Workload="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.521668 containerd[1524]: 2025-07-15 04:44:28.505 [INFO][4818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0", GenerateName:"calico-apiserver-5c4b7c8858-", Namespace:"calico-apiserver", SelfLink:"", UID:"aea0df7c-5fe3-48eb-abd7-ab7bd635c065", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c4b7c8858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c4b7c8858-wmjsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f1f0d1fae1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:28.521668 containerd[1524]: 2025-07-15 04:44:28.505 [INFO][4818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.521668 containerd[1524]: 2025-07-15 04:44:28.505 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f1f0d1fae1 ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.521668 containerd[1524]: 2025-07-15 04:44:28.509 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.521668 containerd[1524]: 2025-07-15 04:44:28.509 [INFO][4818] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0", GenerateName:"calico-apiserver-5c4b7c8858-", Namespace:"calico-apiserver", SelfLink:"", UID:"aea0df7c-5fe3-48eb-abd7-ab7bd635c065", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c4b7c8858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882", Pod:"calico-apiserver-5c4b7c8858-wmjsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f1f0d1fae1", MAC:"56:b8:e7:bb:7e:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:28.521668 containerd[1524]: 2025-07-15 04:44:28.518 [INFO][4818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" Namespace="calico-apiserver" Pod="calico-apiserver-5c4b7c8858-wmjsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c4b7c8858--wmjsf-eth0" Jul 15 04:44:28.549038 containerd[1524]: time="2025-07-15T04:44:28.548941939Z" level=info msg="connecting to shim 74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882" address="unix:///run/containerd/s/a518c67ff9f51f21eabf8f15b0215cf5bd236923d19bde720658d35fe37dec5a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:28.553443 kubelet[2670]: I0715 04:44:28.553331 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c4b7c8858-4lhx7" podStartSLOduration=28.084728858 podStartE2EDuration="29.55331443s" podCreationTimestamp="2025-07-15 04:43:59 +0000 UTC" firstStartedPulling="2025-07-15 04:44:26.745054112 +0000 UTC m=+41.488944992" lastFinishedPulling="2025-07-15 04:44:28.213639684 +0000 UTC m=+42.957530564" observedRunningTime="2025-07-15 04:44:28.551858093 +0000 UTC m=+43.295748973" watchObservedRunningTime="2025-07-15 04:44:28.55331443 +0000 UTC m=+43.297205270" Jul 15 04:44:28.595368 systemd[1]: Started cri-containerd-74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882.scope - libcontainer container 74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882. 
Jul 15 04:44:28.628216 systemd-networkd[1435]: calie78c8a1225e: Link UP Jul 15 04:44:28.628595 systemd-networkd[1435]: calie78c8a1225e: Gained carrier Jul 15 04:44:28.635472 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:28.646011 kubelet[2670]: I0715 04:44:28.645943 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jsx84" podStartSLOduration=37.645912368 podStartE2EDuration="37.645912368s" podCreationTimestamp="2025-07-15 04:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:44:28.569060589 +0000 UTC m=+43.312951469" watchObservedRunningTime="2025-07-15 04:44:28.645912368 +0000 UTC m=+43.389803208" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.420 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xrgjj-eth0 csi-node-driver- calico-system f012837f-8aa6-4d96-a20b-3f91976c5b9b 682 0 2025-07-15 04:44:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xrgjj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie78c8a1225e [] [] }} ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.420 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.462 [INFO][4852] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" HandleID="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Workload="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.462 [INFO][4852] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" HandleID="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Workload="localhost-k8s-csi--node--driver--xrgjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xrgjj", "timestamp":"2025-07-15 04:44:28.46261183 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.462 [INFO][4852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.501 [INFO][4852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.501 [INFO][4852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.573 [INFO][4852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.594 [INFO][4852] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.600 [INFO][4852] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.602 [INFO][4852] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.607 [INFO][4852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.607 [INFO][4852] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.610 [INFO][4852] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222 Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.614 [INFO][4852] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.621 [INFO][4852] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.622 [INFO][4852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" host="localhost" Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.622 [INFO][4852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:44:28.649216 containerd[1524]: 2025-07-15 04:44:28.622 [INFO][4852] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" HandleID="k8s-pod-network.3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Workload="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.649859 containerd[1524]: 2025-07-15 04:44:28.626 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xrgjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f012837f-8aa6-4d96-a20b-3f91976c5b9b", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xrgjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie78c8a1225e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:28.649859 containerd[1524]: 2025-07-15 04:44:28.626 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.649859 containerd[1524]: 2025-07-15 04:44:28.626 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie78c8a1225e ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.649859 containerd[1524]: 2025-07-15 04:44:28.631 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.649859 containerd[1524]: 2025-07-15 04:44:28.632 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" 
Namespace="calico-system" Pod="csi-node-driver-xrgjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xrgjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f012837f-8aa6-4d96-a20b-3f91976c5b9b", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222", Pod:"csi-node-driver-xrgjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie78c8a1225e", MAC:"9a:a0:41:28:66:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:44:28.649859 containerd[1524]: 2025-07-15 04:44:28.646 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" Namespace="calico-system" Pod="csi-node-driver-xrgjj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xrgjj-eth0" Jul 15 04:44:28.673694 containerd[1524]: time="2025-07-15T04:44:28.673595248Z" level=info msg="connecting to shim 3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222" address="unix:///run/containerd/s/bd93a13ab22550635f47398913aacf162a02af9f942e0e6636195303b4d7b306" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:44:28.698379 containerd[1524]: time="2025-07-15T04:44:28.698334572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c4b7c8858-wmjsf,Uid:aea0df7c-5fe3-48eb-abd7-ab7bd635c065,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882\"" Jul 15 04:44:28.711458 containerd[1524]: time="2025-07-15T04:44:28.711413720Z" level=info msg="CreateContainer within sandbox \"74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:44:28.718238 systemd[1]: Started cri-containerd-3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222.scope - libcontainer container 3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222. 
Jul 15 04:44:28.728249 containerd[1524]: time="2025-07-15T04:44:28.728210338Z" level=info msg="Container 24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:28.735748 containerd[1524]: time="2025-07-15T04:44:28.735693161Z" level=info msg="CreateContainer within sandbox \"74caf92a1dde456bc0253b0673d100290b6b3c9052e7bca4be9e10b7e8358882\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5\"" Jul 15 04:44:28.738047 containerd[1524]: time="2025-07-15T04:44:28.737455767Z" level=info msg="StartContainer for \"24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5\"" Jul 15 04:44:28.742152 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:44:28.745006 containerd[1524]: time="2025-07-15T04:44:28.744971633Z" level=info msg="connecting to shim 24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5" address="unix:///run/containerd/s/a518c67ff9f51f21eabf8f15b0215cf5bd236923d19bde720658d35fe37dec5a" protocol=ttrpc version=3 Jul 15 04:44:28.764979 containerd[1524]: time="2025-07-15T04:44:28.764884183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrgjj,Uid:f012837f-8aa6-4d96-a20b-3f91976c5b9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222\"" Jul 15 04:44:28.769228 systemd[1]: Started cri-containerd-24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5.scope - libcontainer container 24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5. 
Jul 15 04:44:28.805839 containerd[1524]: time="2025-07-15T04:44:28.805797626Z" level=info msg="StartContainer for \"24e18dc7dcb9b517545ff5175cfaf74fd42b7546cdbe29744577ee7f89abd5e5\" returns successfully" Jul 15 04:44:29.435274 systemd-networkd[1435]: cali631b75de03c: Gained IPv6LL Jul 15 04:44:29.550591 kubelet[2670]: I0715 04:44:29.550548 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:44:29.564861 kubelet[2670]: I0715 04:44:29.564778 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c4b7c8858-wmjsf" podStartSLOduration=30.564755598 podStartE2EDuration="30.564755598s" podCreationTimestamp="2025-07-15 04:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:44:29.558548308 +0000 UTC m=+44.302439188" watchObservedRunningTime="2025-07-15 04:44:29.564755598 +0000 UTC m=+44.308646478" Jul 15 04:44:29.631312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517446412.mount: Deactivated successfully. 
Jul 15 04:44:29.691461 systemd-networkd[1435]: cali2f1f0d1fae1: Gained IPv6LL Jul 15 04:44:30.203230 systemd-networkd[1435]: calie78c8a1225e: Gained IPv6LL Jul 15 04:44:30.343850 containerd[1524]: time="2025-07-15T04:44:30.343264588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:30.344482 containerd[1524]: time="2025-07-15T04:44:30.344429813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 04:44:30.345814 containerd[1524]: time="2025-07-15T04:44:30.345697086Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:30.351777 containerd[1524]: time="2025-07-15T04:44:30.351724948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:30.352519 containerd[1524]: time="2025-07-15T04:44:30.352428451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.137738588s" Jul 15 04:44:30.352519 containerd[1524]: time="2025-07-15T04:44:30.352465414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 04:44:30.353798 containerd[1524]: time="2025-07-15T04:44:30.353739289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:44:30.355666 
containerd[1524]: time="2025-07-15T04:44:30.355565373Z" level=info msg="CreateContainer within sandbox \"8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 04:44:30.367299 containerd[1524]: time="2025-07-15T04:44:30.367251782Z" level=info msg="Container 075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:30.376205 containerd[1524]: time="2025-07-15T04:44:30.376160342Z" level=info msg="CreateContainer within sandbox \"8c28bfb015cc2320d06cb0da95d40063e3df7d51033f5a5452a12649cacccdf4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970\"" Jul 15 04:44:30.378751 containerd[1524]: time="2025-07-15T04:44:30.378716212Z" level=info msg="StartContainer for \"075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970\"" Jul 15 04:44:30.380461 containerd[1524]: time="2025-07-15T04:44:30.379926841Z" level=info msg="connecting to shim 075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970" address="unix:///run/containerd/s/4a7517173c9aecabaff588c82b420772a4feb23aa83c6fecd6e0deee0c263c1d" protocol=ttrpc version=3 Jul 15 04:44:30.417215 systemd[1]: Started cri-containerd-075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970.scope - libcontainer container 075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970. 
Jul 15 04:44:30.535654 containerd[1524]: time="2025-07-15T04:44:30.535549978Z" level=info msg="StartContainer for \"075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970\" returns successfully" Jul 15 04:44:30.552336 kubelet[2670]: I0715 04:44:30.552263 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:44:30.717096 containerd[1524]: time="2025-07-15T04:44:30.717007275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970\" id:\"e1a7546e1bc65b8e6e180469b8e04592f4d02ea6ebcc0c41a37eaa0f021a6d4c\" pid:5084 exited_at:{seconds:1752554670 nanos:716123716}" Jul 15 04:44:30.732743 kubelet[2670]: I0715 04:44:30.732476 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-db5m2" podStartSLOduration=22.16113725 podStartE2EDuration="25.732459823s" podCreationTimestamp="2025-07-15 04:44:05 +0000 UTC" firstStartedPulling="2025-07-15 04:44:26.782312506 +0000 UTC m=+41.526203426" lastFinishedPulling="2025-07-15 04:44:30.353635119 +0000 UTC m=+45.097525999" observedRunningTime="2025-07-15 04:44:30.568731918 +0000 UTC m=+45.312622838" watchObservedRunningTime="2025-07-15 04:44:30.732459823 +0000 UTC m=+45.476350703" Jul 15 04:44:31.617948 containerd[1524]: time="2025-07-15T04:44:31.617898634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:31.620645 containerd[1524]: time="2025-07-15T04:44:31.620599031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 04:44:31.622016 containerd[1524]: time="2025-07-15T04:44:31.621953551Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:31.625827 
containerd[1524]: time="2025-07-15T04:44:31.625704600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:31.627564 containerd[1524]: time="2025-07-15T04:44:31.627404830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.273626618s" Jul 15 04:44:31.627564 containerd[1524]: time="2025-07-15T04:44:31.627440553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 04:44:31.630933 containerd[1524]: time="2025-07-15T04:44:31.630891977Z" level=info msg="CreateContainer within sandbox \"3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 04:44:31.638411 containerd[1524]: time="2025-07-15T04:44:31.638365594Z" level=info msg="Container 6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:31.660399 containerd[1524]: time="2025-07-15T04:44:31.660348287Z" level=info msg="CreateContainer within sandbox \"3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b\"" Jul 15 04:44:31.661784 containerd[1524]: time="2025-07-15T04:44:31.661753771Z" level=info msg="StartContainer for \"6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b\"" Jul 15 04:44:31.666745 containerd[1524]: 
time="2025-07-15T04:44:31.666681564Z" level=info msg="connecting to shim 6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b" address="unix:///run/containerd/s/bd93a13ab22550635f47398913aacf162a02af9f942e0e6636195303b4d7b306" protocol=ttrpc version=3 Jul 15 04:44:31.695340 systemd[1]: Started cri-containerd-6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b.scope - libcontainer container 6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b. Jul 15 04:44:31.746551 containerd[1524]: time="2025-07-15T04:44:31.746491903Z" level=info msg="StartContainer for \"6ad2ec7d93b3fd8f1a3c8ec4cd9561046f63b8010d11ed722d345cfb86e9e75b\" returns successfully" Jul 15 04:44:31.750090 containerd[1524]: time="2025-07-15T04:44:31.749907804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 04:44:32.082573 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:52566.service - OpenSSH per-connection server daemon (10.0.0.1:52566). Jul 15 04:44:32.146412 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 52566 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:32.148116 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:32.152355 systemd-logind[1506]: New session 9 of user core. Jul 15 04:44:32.165228 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 04:44:32.399187 sshd[5138]: Connection closed by 10.0.0.1 port 52566 Jul 15 04:44:32.399456 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:32.403335 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:52566.service: Deactivated successfully. Jul 15 04:44:32.405218 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:44:32.405859 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:44:32.407372 systemd-logind[1506]: Removed session 9. 
Jul 15 04:44:33.008694 containerd[1524]: time="2025-07-15T04:44:33.008646728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:33.009538 containerd[1524]: time="2025-07-15T04:44:33.009407072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 04:44:33.010418 containerd[1524]: time="2025-07-15T04:44:33.010359033Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:33.013655 containerd[1524]: time="2025-07-15T04:44:33.013222315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:44:33.013764 containerd[1524]: time="2025-07-15T04:44:33.013734958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.263788991s" Jul 15 04:44:33.014149 containerd[1524]: time="2025-07-15T04:44:33.014108910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 04:44:33.017432 containerd[1524]: time="2025-07-15T04:44:33.017338903Z" level=info msg="CreateContainer within sandbox \"3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 04:44:33.027467 containerd[1524]: time="2025-07-15T04:44:33.027248101Z" level=info msg="Container 6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:44:33.037880 containerd[1524]: time="2025-07-15T04:44:33.037825235Z" level=info msg="CreateContainer within sandbox \"3219a53bbb20413a41915a33068250448f3bb3dcce7a070a6802c2f34b65a222\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da\"" Jul 15 04:44:33.038513 containerd[1524]: time="2025-07-15T04:44:33.038340719Z" level=info msg="StartContainer for \"6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da\"" Jul 15 04:44:33.039954 containerd[1524]: time="2025-07-15T04:44:33.039919292Z" level=info msg="connecting to shim 6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da" address="unix:///run/containerd/s/bd93a13ab22550635f47398913aacf162a02af9f942e0e6636195303b4d7b306" protocol=ttrpc version=3 Jul 15 04:44:33.068208 systemd[1]: Started cri-containerd-6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da.scope - libcontainer container 6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da. 
Jul 15 04:44:33.117304 containerd[1524]: time="2025-07-15T04:44:33.117264472Z" level=info msg="StartContainer for \"6c514c01267251880c0295207980ec6a8dfc289363a81accd861614c4a1dc4da\" returns successfully" Jul 15 04:44:33.417059 kubelet[2670]: I0715 04:44:33.416968 2670 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 04:44:33.419505 kubelet[2670]: I0715 04:44:33.419478 2670 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 04:44:33.580791 kubelet[2670]: I0715 04:44:33.580607 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xrgjj" podStartSLOduration=24.333515369 podStartE2EDuration="28.580585407s" podCreationTimestamp="2025-07-15 04:44:05 +0000 UTC" firstStartedPulling="2025-07-15 04:44:28.767680846 +0000 UTC m=+43.511571726" lastFinishedPulling="2025-07-15 04:44:33.014750884 +0000 UTC m=+47.758641764" observedRunningTime="2025-07-15 04:44:33.579946353 +0000 UTC m=+48.323837233" watchObservedRunningTime="2025-07-15 04:44:33.580585407 +0000 UTC m=+48.324476287" Jul 15 04:44:37.416719 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:59082.service - OpenSSH per-connection server daemon (10.0.0.1:59082). Jul 15 04:44:37.490439 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 59082 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:37.492578 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:37.498489 systemd-logind[1506]: New session 10 of user core. Jul 15 04:44:37.507421 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 15 04:44:37.700485 sshd[5201]: Connection closed by 10.0.0.1 port 59082 Jul 15 04:44:37.701170 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:37.709476 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:59082.service: Deactivated successfully. Jul 15 04:44:37.711227 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:44:37.711943 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:44:37.714590 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:59084.service - OpenSSH per-connection server daemon (10.0.0.1:59084). Jul 15 04:44:37.717473 systemd-logind[1506]: Removed session 10. Jul 15 04:44:37.774091 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 59084 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:37.775401 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:37.780978 systemd-logind[1506]: New session 11 of user core. Jul 15 04:44:37.794205 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 04:44:38.049523 sshd[5218]: Connection closed by 10.0.0.1 port 59084 Jul 15 04:44:38.050923 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:38.061239 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:59084.service: Deactivated successfully. Jul 15 04:44:38.063480 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:44:38.065610 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:44:38.072107 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:59092.service - OpenSSH per-connection server daemon (10.0.0.1:59092). Jul 15 04:44:38.074054 systemd-logind[1506]: Removed session 11. 
Jul 15 04:44:38.127631 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 59092 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:38.129175 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:38.133783 systemd-logind[1506]: New session 12 of user core. Jul 15 04:44:38.143216 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:44:38.308452 sshd[5242]: Connection closed by 10.0.0.1 port 59092 Jul 15 04:44:38.308719 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:38.312657 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:59092.service: Deactivated successfully. Jul 15 04:44:38.314382 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 04:44:38.317396 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:44:38.318400 systemd-logind[1506]: Removed session 12. Jul 15 04:44:41.409269 containerd[1524]: time="2025-07-15T04:44:41.409221156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970\" id:\"3dcda045851dbbf8065651c95507c3bf2b2a40ec7bcb9f74e62ae5682e92510d\" pid:5270 exited_at:{seconds:1752554681 nanos:408864809}" Jul 15 04:44:43.327685 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:42104.service - OpenSSH per-connection server daemon (10.0.0.1:42104). Jul 15 04:44:43.398333 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 42104 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:43.399586 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:43.403423 systemd-logind[1506]: New session 13 of user core. Jul 15 04:44:43.413204 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 15 04:44:43.549822 sshd[5284]: Connection closed by 10.0.0.1 port 42104 Jul 15 04:44:43.549892 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:43.558995 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:42104.service: Deactivated successfully. Jul 15 04:44:43.560637 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 04:44:43.563765 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. Jul 15 04:44:43.565763 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:42110.service - OpenSSH per-connection server daemon (10.0.0.1:42110). Jul 15 04:44:43.567653 systemd-logind[1506]: Removed session 13. Jul 15 04:44:43.624260 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 42110 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:44:43.623245 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:44:43.631962 systemd-logind[1506]: New session 14 of user core. Jul 15 04:44:43.642190 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 04:44:43.874234 sshd[5301]: Connection closed by 10.0.0.1 port 42110 Jul 15 04:44:43.874984 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Jul 15 04:44:43.882203 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:42110.service: Deactivated successfully. Jul 15 04:44:43.883783 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 04:44:43.884525 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. Jul 15 04:44:43.886719 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:42118.service - OpenSSH per-connection server daemon (10.0.0.1:42118). Jul 15 04:44:43.888151 systemd-logind[1506]: Removed session 14. 
Jul 15 04:44:43.949378 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 42118 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:44:43.950595 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:44:43.954535 systemd-logind[1506]: New session 15 of user core.
Jul 15 04:44:43.964178 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 04:44:44.738401 sshd[5316]: Connection closed by 10.0.0.1 port 42118
Jul 15 04:44:44.738796 sshd-session[5313]: pam_unix(sshd:session): session closed for user core
Jul 15 04:44:44.755207 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:42118.service: Deactivated successfully.
Jul 15 04:44:44.757466 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 04:44:44.760831 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit.
Jul 15 04:44:44.766618 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:42126.service - OpenSSH per-connection server daemon (10.0.0.1:42126).
Jul 15 04:44:44.769231 systemd-logind[1506]: Removed session 15.
Jul 15 04:44:44.835592 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:44:44.837123 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:44:44.841970 systemd-logind[1506]: New session 16 of user core.
Jul 15 04:44:44.850198 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 04:44:44.924222 containerd[1524]: time="2025-07-15T04:44:44.923254140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\" id:\"a41f1ff247adeb7483f3a246ca877f2cfb8dd5cc261df52e96c842f1f6b22e75\" pid:5354 exited_at:{seconds:1752554684 nanos:915795043}"
Jul 15 04:44:45.171572 sshd[5341]: Connection closed by 10.0.0.1 port 42126
Jul 15 04:44:45.170594 sshd-session[5338]: pam_unix(sshd:session): session closed for user core
Jul 15 04:44:45.180768 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:42126.service: Deactivated successfully.
Jul 15 04:44:45.182942 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 04:44:45.185224 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit.
Jul 15 04:44:45.190749 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:42130.service - OpenSSH per-connection server daemon (10.0.0.1:42130).
Jul 15 04:44:45.192683 systemd-logind[1506]: Removed session 16.
Jul 15 04:44:45.248151 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 42130 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:44:45.249454 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:44:45.253994 systemd-logind[1506]: New session 17 of user core.
Jul 15 04:44:45.263210 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 04:44:45.391207 sshd[5377]: Connection closed by 10.0.0.1 port 42130
Jul 15 04:44:45.391574 sshd-session[5374]: pam_unix(sshd:session): session closed for user core
Jul 15 04:44:45.394986 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit.
Jul 15 04:44:45.395386 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:42130.service: Deactivated successfully.
Jul 15 04:44:45.398121 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 04:44:45.400934 systemd-logind[1506]: Removed session 17.
Jul 15 04:44:48.596894 containerd[1524]: time="2025-07-15T04:44:48.596839420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6be0e7a9c98bb82df7f43c2832dc55f61a6b816989cd8684d4cf684b41c88723\" id:\"42a9f8d456922dbc983f5ab39076a972bb821aedf2e16452027ebba462f81354\" pid:5404 exited_at:{seconds:1752554688 nanos:595929917}"
Jul 15 04:44:50.407153 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:42144.service - OpenSSH per-connection server daemon (10.0.0.1:42144).
Jul 15 04:44:50.479025 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 42144 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:44:50.482059 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:44:50.488123 systemd-logind[1506]: New session 18 of user core.
Jul 15 04:44:50.494253 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 04:44:50.669872 sshd[5425]: Connection closed by 10.0.0.1 port 42144
Jul 15 04:44:50.671178 sshd-session[5422]: pam_unix(sshd:session): session closed for user core
Jul 15 04:44:50.675565 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:42144.service: Deactivated successfully.
Jul 15 04:44:50.678645 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 04:44:50.680692 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit.
Jul 15 04:44:50.681860 systemd-logind[1506]: Removed session 18.
Jul 15 04:44:55.681367 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:54534.service - OpenSSH per-connection server daemon (10.0.0.1:54534).
Jul 15 04:44:55.740118 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 54534 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:44:55.741359 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:44:55.745660 systemd-logind[1506]: New session 19 of user core.
Jul 15 04:44:55.751178 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 04:44:55.893737 sshd[5443]: Connection closed by 10.0.0.1 port 54534
Jul 15 04:44:55.894287 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Jul 15 04:44:55.897645 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:54534.service: Deactivated successfully.
Jul 15 04:44:55.899257 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 04:44:55.901175 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit.
Jul 15 04:44:55.903473 systemd-logind[1506]: Removed session 19.
Jul 15 04:44:56.401350 kubelet[2670]: I0715 04:44:56.401309 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:44:57.567877 containerd[1524]: time="2025-07-15T04:44:57.567832619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7282f1ba49b696cdb12d0bd1905ee8c46cfb3f607de590cdcd9e0761e2b0b6c\" id:\"99a41c022ab68c87114a599a8b2955e4637507c7e883c3dc3c8bdb3b0c43ec78\" pid:5470 exited_at:{seconds:1752554697 nanos:567602752}"
Jul 15 04:45:00.627562 containerd[1524]: time="2025-07-15T04:45:00.627405221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075137bb826fba58101191267080af1bc36518da6f6eeebd3e2e9a954719e970\" id:\"0ad1ad70c2abe53f33da707021714bfbbddc51375b058994ad4f35c028a567d2\" pid:5502 exited_at:{seconds:1752554700 nanos:626806169}"
Jul 15 04:45:00.919331 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:54544.service - OpenSSH per-connection server daemon (10.0.0.1:54544).
Jul 15 04:45:01.016532 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 54544 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:45:01.018563 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:45:01.022412 systemd-logind[1506]: New session 20 of user core.
Jul 15 04:45:01.032239 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 04:45:01.193842 sshd[5518]: Connection closed by 10.0.0.1 port 54544
Jul 15 04:45:01.194451 sshd-session[5514]: pam_unix(sshd:session): session closed for user core
Jul 15 04:45:01.199642 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:54544.service: Deactivated successfully.
Jul 15 04:45:01.201992 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 04:45:01.203266 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit.
Jul 15 04:45:01.205064 systemd-logind[1506]: Removed session 20.