Jul 15 04:46:06.793400 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 15 04:46:06.793420 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025 Jul 15 04:46:06.793430 kernel: KASLR enabled Jul 15 04:46:06.793435 kernel: efi: EFI v2.7 by EDK II Jul 15 04:46:06.793441 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 Jul 15 04:46:06.793446 kernel: random: crng init done Jul 15 04:46:06.793453 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jul 15 04:46:06.793458 kernel: secureboot: Secure boot enabled Jul 15 04:46:06.793464 kernel: ACPI: Early table checksum verification disabled Jul 15 04:46:06.793470 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Jul 15 04:46:06.793476 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 15 04:46:06.793482 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793487 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793493 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793501 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793508 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793514 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793520 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793526 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793532 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:06.793538 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 15 04:46:06.793544 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 15 04:46:06.793550 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 15 04:46:06.793556 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Jul 15 04:46:06.793562 kernel: Zone ranges: Jul 15 04:46:06.793569 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 15 04:46:06.793575 kernel: DMA32 empty Jul 15 04:46:06.793581 kernel: Normal empty Jul 15 04:46:06.793586 kernel: Device empty Jul 15 04:46:06.793592 kernel: Movable zone start for each node Jul 15 04:46:06.793598 kernel: Early memory node ranges Jul 15 04:46:06.793604 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Jul 15 04:46:06.793610 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Jul 15 04:46:06.793616 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Jul 15 04:46:06.793622 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Jul 15 04:46:06.793628 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Jul 15 04:46:06.793633 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Jul 15 04:46:06.793640 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Jul 15 04:46:06.793646 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Jul 15 04:46:06.793652 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 15 04:46:06.793661 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Jul 15 04:46:06.793667 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 15 04:46:06.793674 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Jul 15 04:46:06.793680 kernel: psci: probing for conduit method from ACPI. Jul 15 04:46:06.793688 kernel: psci: PSCIv1.1 detected in firmware. Jul 15 04:46:06.793694 kernel: psci: Using standard PSCI v0.2 function IDs Jul 15 04:46:06.793700 kernel: psci: Trusted OS migration not required Jul 15 04:46:06.793707 kernel: psci: SMC Calling Convention v1.1 Jul 15 04:46:06.793713 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 15 04:46:06.793720 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 15 04:46:06.793726 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 15 04:46:06.793733 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 15 04:46:06.793739 kernel: Detected PIPT I-cache on CPU0 Jul 15 04:46:06.793746 kernel: CPU features: detected: GIC system register CPU interface Jul 15 04:46:06.793753 kernel: CPU features: detected: Spectre-v4 Jul 15 04:46:06.793759 kernel: CPU features: detected: Spectre-BHB Jul 15 04:46:06.793765 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 15 04:46:06.793772 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 15 04:46:06.793778 kernel: CPU features: detected: ARM erratum 1418040 Jul 15 04:46:06.793784 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 15 04:46:06.793791 kernel: alternatives: applying boot alternatives Jul 15 04:46:06.793798 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:46:06.793805 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 04:46:06.793811 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 04:46:06.793819 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 04:46:06.793825 kernel: Fallback order for Node 0: 0 Jul 15 04:46:06.793831 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 15 04:46:06.793838 kernel: Policy zone: DMA Jul 15 04:46:06.793844 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 04:46:06.793850 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 15 04:46:06.793857 kernel: software IO TLB: area num 4. Jul 15 04:46:06.793863 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 15 04:46:06.793870 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Jul 15 04:46:06.793876 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 04:46:06.793883 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 04:46:06.793890 kernel: rcu: RCU event tracing is enabled. Jul 15 04:46:06.793898 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 15 04:46:06.793905 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 04:46:06.793911 kernel: Tracing variant of Tasks RCU enabled. Jul 15 04:46:06.793918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 15 04:46:06.793925 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 04:46:06.793931 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 04:46:06.793938 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 04:46:06.793944 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 15 04:46:06.793950 kernel: GICv3: 256 SPIs implemented Jul 15 04:46:06.793957 kernel: GICv3: 0 Extended SPIs implemented Jul 15 04:46:06.793963 kernel: Root IRQ handler: gic_handle_irq Jul 15 04:46:06.793970 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 15 04:46:06.793977 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 15 04:46:06.793983 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 15 04:46:06.793990 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 15 04:46:06.793997 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 15 04:46:06.794003 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 15 04:46:06.794010 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 15 04:46:06.794016 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 15 04:46:06.794023 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 04:46:06.794029 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:06.794036 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 15 04:46:06.794042 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 15 04:46:06.794050 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 15 04:46:06.794056 kernel: arm-pv: using stolen time PV Jul 15 04:46:06.794063 kernel: Console: colour dummy device 80x25 Jul 15 04:46:06.794069 kernel: ACPI: Core revision 20240827 Jul 15 04:46:06.794076 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 15 04:46:06.794083 kernel: pid_max: default: 32768 minimum: 301 Jul 15 04:46:06.794090 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 04:46:06.794096 kernel: landlock: Up and running. Jul 15 04:46:06.794102 kernel: SELinux: Initializing. Jul 15 04:46:06.794110 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:46:06.794117 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:46:06.794124 kernel: rcu: Hierarchical SRCU implementation. Jul 15 04:46:06.794130 kernel: rcu: Max phase no-delay instances is 400. Jul 15 04:46:06.794137 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 04:46:06.794143 kernel: Remapping and enabling EFI services. Jul 15 04:46:06.794150 kernel: smp: Bringing up secondary CPUs ... 
Jul 15 04:46:06.794156 kernel: Detected PIPT I-cache on CPU1 Jul 15 04:46:06.794163 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 15 04:46:06.794171 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 15 04:46:06.794182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:06.794188 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 15 04:46:06.794197 kernel: Detected PIPT I-cache on CPU2 Jul 15 04:46:06.794203 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 15 04:46:06.794211 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 15 04:46:06.794217 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:06.794224 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 15 04:46:06.794231 kernel: Detected PIPT I-cache on CPU3 Jul 15 04:46:06.794244 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 15 04:46:06.794252 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 15 04:46:06.794263 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:06.794272 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 15 04:46:06.794282 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 04:46:06.794289 kernel: SMP: Total of 4 processors activated. Jul 15 04:46:06.794296 kernel: CPU: All CPU(s) started at EL1 Jul 15 04:46:06.794303 kernel: CPU features: detected: 32-bit EL0 Support Jul 15 04:46:06.794310 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 15 04:46:06.794319 kernel: CPU features: detected: Common not Private translations Jul 15 04:46:06.794326 kernel: CPU features: detected: CRC32 instructions Jul 15 04:46:06.794332 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 15 04:46:06.794350 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 15 04:46:06.794374 kernel: CPU features: detected: LSE atomic instructions Jul 15 04:46:06.794382 kernel: CPU features: detected: Privileged Access Never Jul 15 04:46:06.794389 kernel: CPU features: detected: RAS Extension Support Jul 15 04:46:06.794396 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 15 04:46:06.794403 kernel: alternatives: applying system-wide alternatives Jul 15 04:46:06.794415 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 15 04:46:06.794422 kernel: Memory: 2421924K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 128028K reserved, 16384K cma-reserved) Jul 15 04:46:06.794429 kernel: devtmpfs: initialized Jul 15 04:46:06.794437 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 04:46:06.794444 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 04:46:06.794451 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 15 04:46:06.794458 kernel: 0 pages in range for non-PLT usage Jul 15 04:46:06.794465 kernel: 508448 pages in range for PLT usage Jul 15 04:46:06.794472 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 04:46:06.794481 kernel: SMBIOS 3.0.0 present. 
Jul 15 04:46:06.794488 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 15 04:46:06.794495 kernel: DMI: Memory slots populated: 1/1 Jul 15 04:46:06.794502 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 04:46:06.794509 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 15 04:46:06.794516 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 15 04:46:06.794524 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 15 04:46:06.794531 kernel: audit: initializing netlink subsys (disabled) Jul 15 04:46:06.794538 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jul 15 04:46:06.794547 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 04:46:06.794554 kernel: cpuidle: using governor menu Jul 15 04:46:06.794560 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 15 04:46:06.794567 kernel: ASID allocator initialised with 32768 entries Jul 15 04:46:06.794574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 04:46:06.794581 kernel: Serial: AMBA PL011 UART driver Jul 15 04:46:06.794588 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 04:46:06.794595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 04:46:06.794602 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 15 04:46:06.794610 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 15 04:46:06.794617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 04:46:06.794624 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 04:46:06.794631 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 15 04:46:06.794638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 15 04:46:06.794645 kernel: ACPI: Added _OSI(Module Device) Jul 15 04:46:06.794652 kernel: ACPI: Added _OSI(Processor Device) Jul 15 04:46:06.794658 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 04:46:06.794665 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 04:46:06.794673 kernel: ACPI: Interpreter enabled Jul 15 04:46:06.794680 kernel: ACPI: Using GIC for interrupt routing Jul 15 04:46:06.794687 kernel: ACPI: MCFG table detected, 1 entries Jul 15 04:46:06.794694 kernel: ACPI: CPU0 has been hot-added Jul 15 04:46:06.794700 kernel: ACPI: CPU1 has been hot-added Jul 15 04:46:06.794707 kernel: ACPI: CPU2 has been hot-added Jul 15 04:46:06.794714 kernel: ACPI: CPU3 has been hot-added Jul 15 04:46:06.794721 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 15 04:46:06.794728 kernel: printk: legacy console [ttyAMA0] enabled Jul 15 04:46:06.794736 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 04:46:06.794854 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 04:46:06.794916 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 15 04:46:06.794989 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 15 04:46:06.795045 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 15 04:46:06.795110 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 15 04:46:06.795124 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 15 04:46:06.795136 
kernel: PCI host bridge to bus 0000:00 Jul 15 04:46:06.795236 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 15 04:46:06.795290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 15 04:46:06.795349 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 15 04:46:06.795425 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 04:46:06.795506 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 15 04:46:06.795574 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 04:46:06.795640 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 15 04:46:06.795699 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 15 04:46:06.795756 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 15 04:46:06.795814 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 15 04:46:06.795871 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 15 04:46:06.795930 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 15 04:46:06.795985 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 15 04:46:06.796038 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 15 04:46:06.796089 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 15 04:46:06.796098 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 15 04:46:06.796105 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 15 04:46:06.796112 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 15 04:46:06.796119 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 15 04:46:06.796126 kernel: iommu: Default domain type: Translated Jul 15 04:46:06.796134 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 15 04:46:06.796141 kernel: efivars: Registered efivars operations Jul 15 04:46:06.796148 kernel: vgaarb: loaded Jul 15 04:46:06.796155 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 15 04:46:06.796161 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 04:46:06.796168 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 04:46:06.796175 kernel: pnp: PnP ACPI init Jul 15 04:46:06.796242 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 15 04:46:06.796252 kernel: pnp: PnP ACPI: found 1 devices Jul 15 04:46:06.796260 kernel: NET: Registered PF_INET protocol family Jul 15 04:46:06.796267 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 04:46:06.796274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 04:46:06.796281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 04:46:06.796287 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 04:46:06.796294 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 04:46:06.796301 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 04:46:06.796308 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:46:06.796315 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:46:06.796323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 04:46:06.796330 kernel: PCI: CLS 0 bytes, default 64 Jul 15 04:46:06.796337 
kernel: kvm [1]: HYP mode not available Jul 15 04:46:06.796351 kernel: Initialise system trusted keyrings Jul 15 04:46:06.796369 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 04:46:06.796378 kernel: Key type asymmetric registered Jul 15 04:46:06.796385 kernel: Asymmetric key parser 'x509' registered Jul 15 04:46:06.796392 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 04:46:06.796399 kernel: io scheduler mq-deadline registered Jul 15 04:46:06.796409 kernel: io scheduler kyber registered Jul 15 04:46:06.796416 kernel: io scheduler bfq registered Jul 15 04:46:06.796423 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 15 04:46:06.796430 kernel: ACPI: button: Power Button [PWRB] Jul 15 04:46:06.796437 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 15 04:46:06.796505 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 15 04:46:06.796515 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 04:46:06.796521 kernel: thunder_xcv, ver 1.0 Jul 15 04:46:06.796528 kernel: thunder_bgx, ver 1.0 Jul 15 04:46:06.796537 kernel: nicpf, ver 1.0 Jul 15 04:46:06.796544 kernel: nicvf, ver 1.0 Jul 15 04:46:06.796611 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 15 04:46:06.796666 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:46:06 UTC (1752554766) Jul 15 04:46:06.796675 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 15 04:46:06.796683 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 15 04:46:06.796690 kernel: watchdog: NMI not fully supported Jul 15 04:46:06.796696 kernel: watchdog: Hard watchdog permanently disabled Jul 15 04:46:06.796705 kernel: NET: Registered PF_INET6 protocol family Jul 15 04:46:06.796712 kernel: Segment Routing with IPv6 Jul 15 04:46:06.796719 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 04:46:06.796726 kernel: NET: Registered PF_PACKET protocol family Jul 15 04:46:06.796732 kernel: Key type dns_resolver registered Jul 15 04:46:06.796740 kernel: registered taskstats version 1 Jul 15 04:46:06.796746 kernel: Loading compiled-in X.509 certificates Jul 15 04:46:06.796753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2' Jul 15 04:46:06.796760 kernel: Demotion targets for Node 0: null Jul 15 04:46:06.796768 kernel: Key type .fscrypt registered Jul 15 04:46:06.796775 kernel: Key type fscrypt-provisioning registered Jul 15 04:46:06.796782 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 04:46:06.796789 kernel: ima: Allocated hash algorithm: sha1 Jul 15 04:46:06.796796 kernel: ima: No architecture policies found Jul 15 04:46:06.796802 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 15 04:46:06.796809 kernel: clk: Disabling unused clocks Jul 15 04:46:06.796816 kernel: PM: genpd: Disabling unused power domains Jul 15 04:46:06.796823 kernel: Warning: unable to open an initial console. Jul 15 04:46:06.796831 kernel: Freeing unused kernel memory: 39424K Jul 15 04:46:06.796838 kernel: Run /init as init process Jul 15 04:46:06.796845 kernel: with arguments: Jul 15 04:46:06.796851 kernel: /init Jul 15 04:46:06.796858 kernel: with environment: Jul 15 04:46:06.796865 kernel: HOME=/ Jul 15 04:46:06.796871 kernel: TERM=linux Jul 15 04:46:06.796878 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 04:46:06.796886 systemd[1]: Successfully made /usr/ read-only. 
Jul 15 04:46:06.796897 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:46:06.796905 systemd[1]: Detected virtualization kvm. Jul 15 04:46:06.796912 systemd[1]: Detected architecture arm64. Jul 15 04:46:06.796919 systemd[1]: Running in initrd. Jul 15 04:46:06.796926 systemd[1]: No hostname configured, using default hostname. Jul 15 04:46:06.796934 systemd[1]: Hostname set to . Jul 15 04:46:06.796941 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:46:06.796949 systemd[1]: Queued start job for default target initrd.target. Jul 15 04:46:06.796957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:46:06.796964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:46:06.796972 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 04:46:06.796980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:46:06.796987 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 04:46:06.796996 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 04:46:06.797005 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 04:46:06.797012 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 04:46:06.797020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:46:06.797027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:46:06.797035 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:46:06.797042 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:46:06.797049 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:46:06.797057 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:46:06.797065 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:46:06.797073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:46:06.797080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 04:46:06.797088 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 04:46:06.797095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:46:06.797102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:46:06.797110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:46:06.797117 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:46:06.797125 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 04:46:06.797133 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:46:06.797140 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 15 04:46:06.797148 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 04:46:06.797155 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 04:46:06.797163 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:46:06.797170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:46:06.797177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:06.797185 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 04:46:06.797194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:46:06.797201 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 04:46:06.797221 systemd-journald[242]: Collecting audit messages is disabled. Jul 15 04:46:06.797240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:46:06.797249 systemd-journald[242]: Journal started Jul 15 04:46:06.797266 systemd-journald[242]: Runtime Journal (/run/log/journal/477d6f04d2024995829698e5446edd08) is 6M, max 48.5M, 42.4M free. Jul 15 04:46:06.801605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:06.793303 systemd-modules-load[244]: Inserted module 'overlay' Jul 15 04:46:06.803808 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:46:06.806904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 04:46:06.809995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 04:46:06.809471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:46:06.813474 kernel: Bridge firewalling registered Jul 15 04:46:06.811169 systemd-modules-load[244]: Inserted module 'br_netfilter' Jul 15 04:46:06.814594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:46:06.816288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:46:06.819518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:46:06.821246 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 04:46:06.822913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:46:06.826146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:46:06.829273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:46:06.831439 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:46:06.832286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:46:06.833902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:46:06.847815 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 15 04:46:06.861748 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:46:06.873662 systemd-resolved[288]: Positive Trust Anchors: Jul 15 04:46:06.873677 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:46:06.873708 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:46:06.878261 systemd-resolved[288]: Defaulting to hostname 'linux'. Jul 15 04:46:06.879161 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:46:06.881455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:46:06.939408 kernel: SCSI subsystem initialized Jul 15 04:46:06.943381 kernel: Loading iSCSI transport class v2.0-870. Jul 15 04:46:06.953375 kernel: iscsi: registered transport (tcp) Jul 15 04:46:06.967386 kernel: iscsi: registered transport (qla4xxx) Jul 15 04:46:06.967399 kernel: QLogic iSCSI HBA Driver Jul 15 04:46:06.985091 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:46:07.001415 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:46:07.003218 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:46:07.046649 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 04:46:07.048241 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 04:46:07.111390 kernel: raid6: neonx8 gen() 15739 MB/s Jul 15 04:46:07.128387 kernel: raid6: neonx4 gen() 15836 MB/s Jul 15 04:46:07.145372 kernel: raid6: neonx2 gen() 13226 MB/s Jul 15 04:46:07.162384 kernel: raid6: neonx1 gen() 10451 MB/s Jul 15 04:46:07.179384 kernel: raid6: int64x8 gen() 6903 MB/s Jul 15 04:46:07.196377 kernel: raid6: int64x4 gen() 7349 MB/s Jul 15 04:46:07.213375 kernel: raid6: int64x2 gen() 6109 MB/s Jul 15 04:46:07.230375 kernel: raid6: int64x1 gen() 5056 MB/s Jul 15 04:46:07.230390 kernel: raid6: using algorithm neonx4 gen() 15836 MB/s Jul 15 04:46:07.247389 kernel: raid6: .... xor() 12342 MB/s, rmw enabled Jul 15 04:46:07.247418 kernel: raid6: using neon recovery algorithm Jul 15 04:46:07.252425 kernel: xor: measuring software checksum speed Jul 15 04:46:07.252464 kernel: 8regs : 21636 MB/sec Jul 15 04:46:07.253481 kernel: 32regs : 21722 MB/sec Jul 15 04:46:07.253500 kernel: arm64_neon : 28234 MB/sec Jul 15 04:46:07.254377 kernel: xor: using function: arm64_neon (28234 MB/sec) Jul 15 04:46:07.309383 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 04:46:07.315071 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 15 04:46:07.317302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:46:07.341052 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jul 15 04:46:07.345116 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:46:07.346766 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 04:46:07.372301 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jul 15 04:46:07.392407 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:46:07.394161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:46:07.443275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:46:07.446529 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 04:46:07.486594 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 15 04:46:07.487783 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 04:46:07.493284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:46:07.493436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:07.498708 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 04:46:07.498724 kernel: GPT:9289727 != 19775487 Jul 15 04:46:07.498733 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 04:46:07.498746 kernel: GPT:9289727 != 19775487 Jul 15 04:46:07.498755 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 04:46:07.498763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 04:46:07.495663 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:07.500712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:07.522346 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 04:46:07.532882 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 04:46:07.533885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:07.543505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 04:46:07.551121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 04:46:07.557013 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 04:46:07.557935 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 04:46:07.560127 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:46:07.561762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:46:07.563253 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:46:07.565442 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 04:46:07.566948 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 04:46:07.580093 disk-uuid[590]: Primary Header is updated. Jul 15 04:46:07.580093 disk-uuid[590]: Secondary Entries is updated. Jul 15 04:46:07.580093 disk-uuid[590]: Secondary Header is updated. 
Jul 15 04:46:07.582790 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:46:07.589379 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 04:46:08.595390 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 04:46:08.596054 disk-uuid[595]: The operation has completed successfully. Jul 15 04:46:08.617605 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 04:46:08.617697 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 04:46:08.642915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 04:46:08.670001 sh[610]: Success Jul 15 04:46:08.683567 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 04:46:08.683598 kernel: device-mapper: uevent: version 1.0.3 Jul 15 04:46:08.684407 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 04:46:08.691396 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 04:46:08.717598 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 04:46:08.720032 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 04:46:08.734237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 04:46:08.740379 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 04:46:08.740413 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (622) Jul 15 04:46:08.742603 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692 Jul 15 04:46:08.742616 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:08.743765 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 04:46:08.750933 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 04:46:08.751958 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:46:08.752954 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 04:46:08.753667 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 04:46:08.756098 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 04:46:08.775395 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 15 04:46:08.775440 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:08.776826 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:08.777392 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 04:46:08.783376 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:08.783790 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 04:46:08.785576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 04:46:08.853129 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:46:08.855669 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 15 04:46:08.901621 systemd-networkd[799]: lo: Link UP Jul 15 04:46:08.901634 systemd-networkd[799]: lo: Gained carrier Jul 15 04:46:08.902571 systemd-networkd[799]: Enumeration completed Jul 15 04:46:08.902693 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:46:08.902977 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:08.902982 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:46:08.903970 systemd-networkd[799]: eth0: Link UP Jul 15 04:46:08.903973 systemd-networkd[799]: eth0: Gained carrier Jul 15 04:46:08.903983 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:08.904429 systemd[1]: Reached target network.target - Network. Jul 15 04:46:08.914407 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 04:46:08.934432 ignition[700]: Ignition 2.21.0 Jul 15 04:46:08.934449 ignition[700]: Stage: fetch-offline Jul 15 04:46:08.934487 ignition[700]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:08.934545 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:08.934748 ignition[700]: parsed url from cmdline: "" Jul 15 04:46:08.934752 ignition[700]: no config URL provided Jul 15 04:46:08.934757 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 04:46:08.934764 ignition[700]: no config at "/usr/lib/ignition/user.ign" Jul 15 04:46:08.934783 ignition[700]: op(1): [started] loading QEMU firmware config module Jul 15 04:46:08.934787 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 04:46:08.940074 ignition[700]: op(1): [finished] loading QEMU firmware config module Jul 15 04:46:08.989820 ignition[700]: parsing config with SHA512: 3ab399c4294fc67f4f4a0203b795258c00886ab1b456d060c54c6e54e868623a8c0ebd3a754d8cce5f36c84329c319a0f7706d4841c7cb9e390d55ed8c64e186 Jul 15 04:46:08.994412 unknown[700]: fetched base config from "system" Jul 15 04:46:08.994424 unknown[700]: fetched user config from "qemu" Jul 15 04:46:08.994868 ignition[700]: fetch-offline: fetch-offline passed Jul 15 04:46:08.994937 ignition[700]: Ignition finished successfully Jul 15 04:46:08.996981 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:46:08.998414 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 04:46:08.999158 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 04:46:09.025090 ignition[813]: Ignition 2.21.0 Jul 15 04:46:09.025109 ignition[813]: Stage: kargs Jul 15 04:46:09.025231 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:09.025240 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:09.026856 ignition[813]: kargs: kargs passed Jul 15 04:46:09.026930 ignition[813]: Ignition finished successfully Jul 15 04:46:09.029975 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 04:46:09.031668 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 15 04:46:09.051753 ignition[821]: Ignition 2.21.0 Jul 15 04:46:09.051770 ignition[821]: Stage: disks Jul 15 04:46:09.051903 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:09.051912 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:09.054133 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 04:46:09.052680 ignition[821]: disks: disks passed Jul 15 04:46:09.055814 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 04:46:09.052721 ignition[821]: Ignition finished successfully Jul 15 04:46:09.056751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 04:46:09.057904 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:46:09.059260 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:46:09.060349 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:46:09.062490 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 04:46:09.081326 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 04:46:09.085434 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 04:46:09.087541 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 04:46:09.144376 kernel: EXT4-fs (vda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none. Jul 15 04:46:09.144932 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 04:46:09.145988 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 04:46:09.149618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:46:09.151065 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 04:46:09.151863 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 04:46:09.151905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 04:46:09.151930 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:46:09.175295 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 04:46:09.178137 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 04:46:09.180889 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 15 04:46:09.180910 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:09.180920 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:09.182386 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 04:46:09.183530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 04:46:09.237763 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 04:46:09.240829 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Jul 15 04:46:09.244132 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 04:46:09.247124 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 04:46:09.316518 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 04:46:09.318447 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 15 04:46:09.320500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 04:46:09.337394 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:09.355738 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 04:46:09.358000 ignition[953]: INFO : Ignition 2.21.0 Jul 15 04:46:09.358723 ignition[953]: INFO : Stage: mount Jul 15 04:46:09.359365 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:09.360027 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:09.361662 ignition[953]: INFO : mount: mount passed Jul 15 04:46:09.362424 ignition[953]: INFO : Ignition finished successfully Jul 15 04:46:09.365410 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 04:46:09.366932 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 04:46:09.874802 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 04:46:09.876233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:46:09.894523 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965) Jul 15 04:46:09.894554 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:09.894564 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:09.895635 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 04:46:09.898008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 04:46:09.930580 ignition[982]: INFO : Ignition 2.21.0 Jul 15 04:46:09.930580 ignition[982]: INFO : Stage: files Jul 15 04:46:09.931772 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:09.931772 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:09.933266 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Jul 15 04:46:09.933266 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 04:46:09.933266 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 04:46:09.936035 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 04:46:09.936035 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 04:46:09.936035 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 04:46:09.936035 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 04:46:09.934866 unknown[982]: wrote ssh authorized keys file for user: core Jul 15 04:46:09.940584 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 15 04:46:10.682487 systemd-networkd[799]: eth0: Gained IPv6LL Jul 15 04:46:11.824433 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 04:46:21.608927 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 04:46:21.608927 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 04:46:21.612186 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 04:46:21.627598 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 15 04:46:22.154605 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 04:46:22.680711 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 04:46:22.680711 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 04:46:22.683804 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:46:22.685137 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:46:22.685137 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 04:46:22.685137 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 15 04:46:22.685137 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 04:46:22.685137 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 04:46:22.685137 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 15 04:46:22.685137 ignition[982]: 
INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 04:46:22.700434 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 04:46:22.704096 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 04:46:22.705197 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 04:46:22.705197 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 15 04:46:22.705197 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 04:46:22.705197 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:46:22.705197 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:46:22.705197 ignition[982]: INFO : files: files passed Jul 15 04:46:22.705197 ignition[982]: INFO : Ignition finished successfully Jul 15 04:46:22.706278 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 04:46:22.710322 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 04:46:22.711856 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 04:46:22.726903 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 04:46:22.728070 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 04:46:22.729752 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 04:46:22.731228 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:46:22.731228 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:46:22.733758 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:46:22.734820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:46:22.735988 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 04:46:22.739523 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 04:46:22.783439 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 04:46:22.784260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 04:46:22.786341 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 04:46:22.787821 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 04:46:22.788594 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 04:46:22.789444 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 04:46:22.820981 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:46:22.823194 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 04:46:22.849693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jul 15 04:46:22.850712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:46:22.852191 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 04:46:22.853494 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 04:46:22.853627 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:46:22.855490 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 04:46:22.856959 systemd[1]: Stopped target basic.target - Basic System. Jul 15 04:46:22.858099 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 04:46:22.859306 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:46:22.860808 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 04:46:22.862165 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:46:22.863568 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 04:46:22.864889 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:46:22.866263 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 04:46:22.867689 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 04:46:22.868930 systemd[1]: Stopped target swap.target - Swaps. Jul 15 04:46:22.870012 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 04:46:22.870148 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:46:22.871849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:46:22.873233 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:46:22.874710 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 04:46:22.875483 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:46:22.876921 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 04:46:22.877039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 04:46:22.878976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 04:46:22.879099 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:46:22.880738 systemd[1]: Stopped target paths.target - Path Units. Jul 15 04:46:22.881970 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 04:46:22.885428 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:46:22.886417 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 04:46:22.887960 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 04:46:22.889050 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 04:46:22.889137 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:46:22.890217 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 04:46:22.890289 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:46:22.891390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 04:46:22.891509 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:46:22.892753 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 04:46:22.892850 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 15 04:46:22.894751 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 04:46:22.895825 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 04:46:22.895969 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:46:22.898203 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 04:46:22.899191 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 04:46:22.899337 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:46:22.900674 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 04:46:22.900782 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:46:22.905463 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 04:46:22.909494 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 04:46:22.917682 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 04:46:22.922053 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 04:46:22.922155 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 04:46:22.924424 ignition[1037]: INFO : Ignition 2.21.0 Jul 15 04:46:22.924424 ignition[1037]: INFO : Stage: umount Jul 15 04:46:22.924424 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:22.924424 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:22.927071 ignition[1037]: INFO : umount: umount passed Jul 15 04:46:22.927071 ignition[1037]: INFO : Ignition finished successfully Jul 15 04:46:22.927621 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 04:46:22.928431 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 04:46:22.929829 systemd[1]: Stopped target network.target - Network. Jul 15 04:46:22.930850 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 04:46:22.930907 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 04:46:22.932149 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 04:46:22.932203 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 04:46:22.933405 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 04:46:22.933452 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 04:46:22.934642 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 04:46:22.934678 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 04:46:22.935904 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 04:46:22.935947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 04:46:22.937242 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 04:46:22.938509 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 04:46:22.944845 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 04:46:22.944960 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 04:46:22.948087 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 04:46:22.948494 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 04:46:22.948542 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 15 04:46:22.951347 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:46:22.951685 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 04:46:22.952462 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 04:46:22.954695 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 04:46:22.955120 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 04:46:22.956349 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 04:46:22.956418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:46:22.958623 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 04:46:22.959714 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 04:46:22.959781 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:46:22.961218 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:46:22.961262 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:46:22.963384 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 04:46:22.963429 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 04:46:22.964815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:46:22.967083 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:46:22.986088 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:46:22.986269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:46:22.988065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:46:22.988107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:46:22.989415 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 04:46:22.989447 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:46:22.990831 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:46:22.990877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:46:22.992896 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:46:22.992941 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 04:46:22.994827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:46:22.994877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:46:22.997315 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:46:22.998596 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 04:46:22.998662 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:46:23.000997 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:46:23.001041 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:46:23.003411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:46:23.003455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:23.006527 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 15 04:46:23.010474 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:46:23.015824 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:46:23.015924 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:46:23.017607 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:46:23.019812 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:46:23.046801 systemd[1]: Switching root. Jul 15 04:46:23.077674 systemd-journald[242]: Journal stopped Jul 15 04:46:23.902113 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). Jul 15 04:46:23.902174 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:46:23.902188 kernel: SELinux: policy capability open_perms=1 Jul 15 04:46:23.902198 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:46:23.902211 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:46:23.902221 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:46:23.902234 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:46:23.902244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:46:23.902258 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:46:23.902270 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:46:23.902280 kernel: audit: type=1403 audit(1752554783.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:46:23.902299 systemd[1]: Successfully loaded SELinux policy in 61.834ms. Jul 15 04:46:23.902318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.550ms. Jul 15 04:46:23.902330 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:46:23.902341 systemd[1]: Detected virtualization kvm. Jul 15 04:46:23.902351 systemd[1]: Detected architecture arm64. Jul 15 04:46:23.902417 systemd[1]: Detected first boot. Jul 15 04:46:23.902436 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:46:23.902448 zram_generator::config[1083]: No configuration found. Jul 15 04:46:23.902459 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:46:23.902472 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:46:23.902483 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 04:46:23.902494 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:46:23.902504 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:46:23.902514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:46:23.902524 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:46:23.902534 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:46:23.902545 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:46:23.902555 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:46:23.902567 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:46:23.902578 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
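The "systemd 256.8 running in system mode (...)" banner above encodes compile-time options as a list of +/- prefixed feature names. A purely illustrative one-liner for turning that banner into an enabled/disabled map follows; the sample string is copied (and shortened) from the entry above.

    # Shortened copy of the feature list from the "systemd 256.8 running in system mode" entry above.
    banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID"

    features = {tok[1:]: tok.startswith("+") for tok in banner.split()}
    enabled  = sorted(name for name, on in features.items() if on)
    disabled = sorted(name for name, on in features.items() if not on)
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))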
Jul 15 04:46:23.902588 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:46:23.902597 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:46:23.902607 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:46:23.902619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:46:23.902630 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 04:46:23.902640 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:46:23.902652 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:46:23.902663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:46:23.902674 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 04:46:23.902685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:46:23.902695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:46:23.902706 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:46:23.902716 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:46:23.902726 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:46:23.902741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:46:23.902751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:46:23.902765 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:46:23.902775 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:46:23.902785 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:46:23.902795 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 04:46:23.902805 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:46:23.902816 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:46:23.902826 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:46:23.902838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:46:23.902848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:46:23.902858 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:46:23.902868 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:46:23.902878 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:46:23.902888 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 04:46:23.902899 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:46:23.902909 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:46:23.902919 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:46:23.902930 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:46:23.902941 systemd[1]: Reached target machines.target - Containers. 
Jul 15 04:46:23.902951 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:46:23.902962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:23.902972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:46:23.902983 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:46:23.902995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:23.903005 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:46:23.903015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:23.903027 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:46:23.903037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:23.903048 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:46:23.903058 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:46:23.903068 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:46:23.903078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 04:46:23.903088 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:46:23.903099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:23.903110 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:46:23.903121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:46:23.903131 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:46:23.903141 kernel: fuse: init (API version 7.41) Jul 15 04:46:23.903150 kernel: loop: module loaded Jul 15 04:46:23.903160 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:46:23.903170 kernel: ACPI: bus type drm_connector registered Jul 15 04:46:23.903180 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:46:23.903191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:46:23.903202 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:46:23.903212 systemd[1]: Stopped verity-setup.service. Jul 15 04:46:23.903222 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:46:23.903232 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:46:23.903242 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:46:23.903254 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 04:46:23.903265 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:46:23.903274 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:46:23.903285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:46:23.903326 systemd-journald[1148]: Collecting audit messages is disabled. 
Jul 15 04:46:23.903349 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 04:46:23.903368 systemd-journald[1148]: Journal started Jul 15 04:46:23.903393 systemd-journald[1148]: Runtime Journal (/run/log/journal/477d6f04d2024995829698e5446edd08) is 6M, max 48.5M, 42.4M free. Jul 15 04:46:23.688483 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:46:23.712406 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 04:46:23.712808 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 04:46:23.905422 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:46:23.906213 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 04:46:23.906463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:46:23.907573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:23.907746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:23.908836 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:46:23.909009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:46:23.910036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:23.910196 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:23.911391 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:46:23.911546 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:46:23.912595 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:23.912751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:23.913869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:46:23.914978 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:46:23.916172 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 04:46:23.917417 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:46:23.929329 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:46:23.931438 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:46:23.933345 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 04:46:23.934208 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:46:23.934241 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:46:23.935951 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:46:23.944531 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:46:23.945495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:23.946825 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:46:23.949171 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:46:23.950125 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 15 04:46:23.951177 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:46:23.952165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:46:23.953143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:46:23.957338 systemd-journald[1148]: Time spent on flushing to /var/log/journal/477d6f04d2024995829698e5446edd08 is 17.823ms for 877 entries. Jul 15 04:46:23.957338 systemd-journald[1148]: System Journal (/var/log/journal/477d6f04d2024995829698e5446edd08) is 8M, max 195.6M, 187.6M free. Jul 15 04:46:23.984328 systemd-journald[1148]: Received client request to flush runtime journal. Jul 15 04:46:23.984481 kernel: loop0: detected capacity change from 0 to 105936 Jul 15 04:46:23.957490 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 04:46:23.962402 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:46:23.968180 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:46:23.969473 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:46:23.970710 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:46:23.971817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 04:46:23.974592 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:46:23.978847 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:46:23.988698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:46:23.990776 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:46:23.999564 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:46:24.005394 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:46:24.012646 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 04:46:24.015046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:46:24.020872 kernel: loop1: detected capacity change from 0 to 211168 Jul 15 04:46:24.039041 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 15 04:46:24.039419 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 15 04:46:24.043515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:46:24.049615 kernel: loop2: detected capacity change from 0 to 134232 Jul 15 04:46:24.076425 kernel: loop3: detected capacity change from 0 to 105936 Jul 15 04:46:24.083394 kernel: loop4: detected capacity change from 0 to 211168 Jul 15 04:46:24.088387 kernel: loop5: detected capacity change from 0 to 134232 Jul 15 04:46:24.094463 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 04:46:24.094847 (sd-merge)[1222]: Merged extensions into '/usr'. Jul 15 04:46:24.100074 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:46:24.100094 systemd[1]: Reloading... Jul 15 04:46:24.158182 zram_generator::config[1248]: No configuration found. Jul 15 04:46:24.235565 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 15 04:46:24.236872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:46:24.298725 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:46:24.299059 systemd[1]: Reloading finished in 198 ms. Jul 15 04:46:24.330390 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:46:24.331639 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 04:46:24.341780 systemd[1]: Starting ensure-sysext.service... Jul 15 04:46:24.343475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:46:24.356446 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:46:24.356462 systemd[1]: Reloading... Jul 15 04:46:24.357026 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 04:46:24.357190 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:46:24.357600 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:46:24.357793 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:46:24.358421 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:46:24.358626 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 15 04:46:24.358678 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 15 04:46:24.360870 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:46:24.360884 systemd-tmpfiles[1284]: Skipping /boot Jul 15 04:46:24.366559 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:46:24.366574 systemd-tmpfiles[1284]: Skipping /boot Jul 15 04:46:24.397390 zram_generator::config[1311]: No configuration found. Jul 15 04:46:24.471734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:46:24.533619 systemd[1]: Reloading finished in 176 ms. Jul 15 04:46:24.556401 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:46:24.561793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:46:24.567508 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:46:24.569565 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 04:46:24.571627 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:46:24.574434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:46:24.579970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:46:24.582026 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:46:24.587275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 15 04:46:24.592981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:24.596628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:24.599736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:24.601413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:24.601562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:24.603024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:24.604543 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:24.606327 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 04:46:24.608849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:24.609062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:24.613847 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:24.614021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:24.619203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:24.620823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:24.621787 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jul 15 04:46:24.623157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:24.626633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:24.627717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:24.627887 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:24.640552 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:46:24.645275 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:46:24.645905 augenrules[1384]: No rules Jul 15 04:46:24.649692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:46:24.651423 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:46:24.651676 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:46:24.653829 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 04:46:24.655295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:24.657470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:24.659574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:24.659735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:24.662298 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:24.662489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 15 04:46:24.664115 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:46:24.673958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:46:24.686586 systemd[1]: Finished ensure-sysext.service. Jul 15 04:46:24.694433 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:46:24.695349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:24.699498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:24.703783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:46:24.707587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:24.721001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:24.722524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:24.722561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:24.725619 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:46:24.729773 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 04:46:24.730584 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:46:24.741040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:24.742434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:24.744696 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:46:24.744870 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:46:24.745946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:24.748101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:24.757181 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 04:46:24.759262 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:46:24.772373 augenrules[1425]: /sbin/augenrules: No change Jul 15 04:46:24.773781 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:24.775577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:24.789224 augenrules[1466]: No rules Jul 15 04:46:24.790557 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:46:24.790804 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:46:24.793679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 04:46:24.797235 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:46:24.798886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:46:24.798951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
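Several entries above refer to units whose names embed escaped characters, e.g. dev-disk-by\x2dlabel-OEM.device for /dev/disk/by-label/OEM, or run-credentials-systemd\x2dresolved.service.mount. A rough sketch of the reverse mapping for such .device/.mount names, assuming the usual systemd path-escaping rules ("/" becomes "-", a literal "-" becomes "\x2d"); systemd-escape --unescape --path is the proper tool, this is only an approximation for reading logs.

    import re

    def unit_to_path(unit: str) -> str:
        """Approximate inverse of systemd path escaping for .device/.mount unit names."""
        name = unit.rsplit(".", 1)[0]           # drop the ".device"/".mount" suffix
        name = name.replace("-", "/")           # escaped "/" comes back first
        name = re.sub(r"\\x([0-9a-fA-F]{2})",   # then decode \xNN escapes (e.g. \x2d -> "-")
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unit_to_path(r"dev-disk-by\x2dlabel-OEM.device"))   # -> /dev/disk/by-label/OEM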
Jul 15 04:46:24.829231 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:46:24.855459 systemd-resolved[1350]: Positive Trust Anchors: Jul 15 04:46:24.855480 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:46:24.855511 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:46:24.863568 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 04:46:24.863969 systemd-resolved[1350]: Defaulting to hostname 'linux'. Jul 15 04:46:24.864409 systemd-networkd[1433]: lo: Link UP Jul 15 04:46:24.864413 systemd-networkd[1433]: lo: Gained carrier Jul 15 04:46:24.865120 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:46:24.865337 systemd-networkd[1433]: Enumeration completed Jul 15 04:46:24.867484 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:46:24.868412 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:46:24.868699 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:24.868708 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:46:24.869369 systemd-networkd[1433]: eth0: Link UP Jul 15 04:46:24.869502 systemd[1]: Reached target network.target - Network. Jul 15 04:46:24.869569 systemd-networkd[1433]: eth0: Gained carrier Jul 15 04:46:24.869584 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:24.870208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:46:24.871090 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:46:24.871969 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:46:24.872948 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:46:24.874131 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:46:24.875332 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:46:24.876417 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:46:24.877438 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:46:24.877469 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:46:24.878668 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:46:24.880232 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:46:24.882545 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 15 04:46:24.885313 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:46:24.886531 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:46:24.887507 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 04:46:24.891494 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 04:46:24.892430 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jul 15 04:46:24.893077 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 04:46:24.893137 systemd-timesyncd[1435]: Initial clock synchronization to Tue 2025-07-15 04:46:24.911174 UTC. Jul 15 04:46:24.901818 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 04:46:24.904387 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:46:24.906865 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:46:24.908889 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:46:24.910813 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:46:24.914707 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:46:24.915710 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:46:24.916646 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:46:24.916753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:46:24.917971 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:46:24.922640 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:46:24.924559 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:46:24.927613 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 04:46:24.933659 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:46:24.934713 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:46:24.936462 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 04:46:24.942050 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:46:24.949717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:46:24.951266 jq[1498]: false Jul 15 04:46:24.951925 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:46:24.956057 extend-filesystems[1499]: Found /dev/vda6 Jul 15 04:46:24.959627 extend-filesystems[1499]: Found /dev/vda9 Jul 15 04:46:24.960145 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:46:24.962213 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 04:46:24.962883 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:46:24.964606 systemd[1]: Starting update-engine.service - Update Engine... 
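The systemd-networkd entry above records the DHCPv4 lease in a fixed "address, gateway acquired from server" form. A throwaway sketch that pulls the interface, address, gateway, and DHCP server out of such a line; the sample string is copied from that entry.

    import ipaddress
    import re

    # Sample copied from the systemd-networkd entry above.
    line = "eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1"

    m = re.search(r"(?P<ifname>\S+): DHCPv4 address (?P<addr>\S+), "
                  r"gateway (?P<gw>\S+) acquired from (?P<server>\S+)", line)
    if m:
        iface = ipaddress.ip_interface(m["addr"])           # 10.0.0.76/16
        print("interface:", m["ifname"])
        print("address:  ", iface.ip, "in", iface.network)  # 10.0.0.76 in 10.0.0.0/16
        print("gateway:  ", ipaddress.ip_address(m["gw"]))
        print("server:   ", ipaddress.ip_address(m["server"]))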
Jul 15 04:46:24.966440 extend-filesystems[1499]: Checking size of /dev/vda9 Jul 15 04:46:24.966503 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:46:24.978715 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:46:24.980733 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 04:46:24.980956 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:46:24.981456 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:46:24.981672 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:46:24.982228 jq[1513]: true Jul 15 04:46:24.983995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:46:24.985155 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:46:24.986706 extend-filesystems[1499]: Resized partition /dev/vda9 Jul 15 04:46:25.000270 extend-filesystems[1528]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 04:46:25.002558 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:46:25.006913 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 04:46:25.008503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:25.027213 jq[1526]: true Jul 15 04:46:25.036383 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 04:46:25.043966 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 04:46:25.059394 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 04:46:25.059394 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 04:46:25.059394 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 04:46:25.071078 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Jul 15 04:46:25.073296 tar[1523]: linux-arm64/LICENSE Jul 15 04:46:25.073296 tar[1523]: linux-arm64/helm Jul 15 04:46:25.060877 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:46:25.070718 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 04:46:25.073075 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 04:46:25.074080 systemd-logind[1508]: New seat seat0. Jul 15 04:46:25.075805 update_engine[1510]: I20250715 04:46:25.074955 1510 main.cc:92] Flatcar Update Engine starting Jul 15 04:46:25.095029 dbus-daemon[1496]: [system] SELinux support is enabled Jul 15 04:46:25.102853 update_engine[1510]: I20250715 04:46:25.102514 1510 update_check_scheduler.cc:74] Next update check in 2m47s Jul 15 04:46:25.122534 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:46:25.124028 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 04:46:25.129534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:25.135375 bash[1567]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:46:25.137077 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 04:46:25.138992 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
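For scale, the block counts in the EXT4 resize messages above translate directly into sizes, given the "(4k) blocks" noted by extend-filesystems; a quick back-of-the-envelope check:

    BLOCK = 4096                         # "(4k) blocks" per the extend-filesystems output above
    before, after = 553_472, 1_864_699   # block counts from the EXT4-fs resize message

    def to_gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {to_gib(before):.2f} GiB")          # ~2.11 GiB
    print(f"after:  {to_gib(after):.2f} GiB")           # ~7.11 GiB
    print(f"grown:  {to_gib(after - before):.2f} GiB")  # ~5.00 GiB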
Jul 15 04:46:25.146983 systemd[1]: Started update-engine.service - Update Engine. Jul 15 04:46:25.148819 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 04:46:25.149011 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 04:46:25.149127 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:46:25.150253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:46:25.150386 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 04:46:25.153590 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:46:25.222386 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:46:25.230550 containerd[1527]: time="2025-07-15T04:46:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:46:25.231656 containerd[1527]: time="2025-07-15T04:46:25.231593789Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:46:25.240211 containerd[1527]: time="2025-07-15T04:46:25.240164385Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.206µs" Jul 15 04:46:25.240211 containerd[1527]: time="2025-07-15T04:46:25.240204808Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:46:25.240319 containerd[1527]: time="2025-07-15T04:46:25.240225900Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:46:25.240447 containerd[1527]: time="2025-07-15T04:46:25.240401320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:46:25.240447 containerd[1527]: time="2025-07-15T04:46:25.240436380Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 04:46:25.240514 containerd[1527]: time="2025-07-15T04:46:25.240463995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240537 containerd[1527]: time="2025-07-15T04:46:25.240520868Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240537 containerd[1527]: time="2025-07-15T04:46:25.240532354Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240785 containerd[1527]: time="2025-07-15T04:46:25.240755361Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240785 containerd[1527]: time="2025-07-15T04:46:25.240783217Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240842 containerd[1527]: time="2025-07-15T04:46:25.240795864Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240842 containerd[1527]: time="2025-07-15T04:46:25.240804469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:46:25.240929 containerd[1527]: time="2025-07-15T04:46:25.240874269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:46:25.241092 containerd[1527]: time="2025-07-15T04:46:25.241071301Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:46:25.241120 containerd[1527]: time="2025-07-15T04:46:25.241106041Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:46:25.241120 containerd[1527]: time="2025-07-15T04:46:25.241116046Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:46:25.241163 containerd[1527]: time="2025-07-15T04:46:25.241153548Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:46:25.241699 containerd[1527]: time="2025-07-15T04:46:25.241507189Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:46:25.241699 containerd[1527]: time="2025-07-15T04:46:25.241655593Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:46:25.245447 containerd[1527]: time="2025-07-15T04:46:25.245347053Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:46:25.245447 containerd[1527]: time="2025-07-15T04:46:25.245434423Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245463680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245481330Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245495018Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245505864Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245520392Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245532759Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:46:25.245544 containerd[1527]: time="2025-07-15T04:46:25.245543325Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:46:25.245680 containerd[1527]: time="2025-07-15T04:46:25.245554011Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:46:25.245680 containerd[1527]: time="2025-07-15T04:46:25.245563457Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:46:25.246886 containerd[1527]: time="2025-07-15T04:46:25.246766621Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:46:25.247185 containerd[1527]: time="2025-07-15T04:46:25.247159284Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:46:25.247266 containerd[1527]: time="2025-07-15T04:46:25.247251897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:46:25.247324 containerd[1527]: time="2025-07-15T04:46:25.247310531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 04:46:25.247395 containerd[1527]: time="2025-07-15T04:46:25.247380931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:46:25.247482 containerd[1527]: time="2025-07-15T04:46:25.247466059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 04:46:25.247533 containerd[1527]: time="2025-07-15T04:46:25.247521210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:46:25.247588 containerd[1527]: time="2025-07-15T04:46:25.247575361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:46:25.247647 containerd[1527]: time="2025-07-15T04:46:25.247634795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 04:46:25.247708 containerd[1527]: time="2025-07-15T04:46:25.247693829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:46:25.247764 containerd[1527]: time="2025-07-15T04:46:25.247752782Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:46:25.247818 containerd[1527]: time="2025-07-15T04:46:25.247805332Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:46:25.248056 containerd[1527]: time="2025-07-15T04:46:25.248039945Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:46:25.248110 containerd[1527]: time="2025-07-15T04:46:25.248098699Z" level=info msg="Start snapshots syncer" Jul 15 04:46:25.248183 containerd[1527]: time="2025-07-15T04:46:25.248168939Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:46:25.248512 containerd[1527]: time="2025-07-15T04:46:25.248472151Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:46:25.248669 containerd[1527]: time="2025-07-15T04:46:25.248652334Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:46:25.249400 containerd[1527]: time="2025-07-15T04:46:25.249349490Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:46:25.249634 containerd[1527]: time="2025-07-15T04:46:25.249613201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:46:25.249707 containerd[1527]: time="2025-07-15T04:46:25.249693806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:46:25.249757 containerd[1527]: time="2025-07-15T04:46:25.249744915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:46:25.249829 containerd[1527]: time="2025-07-15T04:46:25.249815315Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:46:25.249882 containerd[1527]: time="2025-07-15T04:46:25.249870227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:46:25.249936 containerd[1527]: time="2025-07-15T04:46:25.249923577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:46:25.249990 containerd[1527]: time="2025-07-15T04:46:25.249976687Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:46:25.250059 containerd[1527]: time="2025-07-15T04:46:25.250045326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:46:25.250115 containerd[1527]: 
time="2025-07-15T04:46:25.250101318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:46:25.250179 containerd[1527]: time="2025-07-15T04:46:25.250166315Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:46:25.250264 containerd[1527]: time="2025-07-15T04:46:25.250249322Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:46:25.250325 containerd[1527]: time="2025-07-15T04:46:25.250309797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:46:25.250390 containerd[1527]: time="2025-07-15T04:46:25.250375794Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:46:25.250468 containerd[1527]: time="2025-07-15T04:46:25.250453158Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:46:25.250515 containerd[1527]: time="2025-07-15T04:46:25.250503107Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:46:25.250563 containerd[1527]: time="2025-07-15T04:46:25.250550814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:46:25.250623 containerd[1527]: time="2025-07-15T04:46:25.250610128Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:46:25.250740 containerd[1527]: time="2025-07-15T04:46:25.250729956Z" level=info msg="runtime interface created" Jul 15 04:46:25.250785 containerd[1527]: time="2025-07-15T04:46:25.250773341Z" level=info msg="created NRI interface" Jul 15 04:46:25.250836 containerd[1527]: time="2025-07-15T04:46:25.250823089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:46:25.250897 containerd[1527]: time="2025-07-15T04:46:25.250885324Z" level=info msg="Connect containerd service" Jul 15 04:46:25.250982 containerd[1527]: time="2025-07-15T04:46:25.250967211Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:46:25.251810 containerd[1527]: time="2025-07-15T04:46:25.251780153Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:46:25.353991 containerd[1527]: time="2025-07-15T04:46:25.353867832Z" level=info msg="Start subscribing containerd event" Jul 15 04:46:25.353991 containerd[1527]: time="2025-07-15T04:46:25.353940513Z" level=info msg="Start recovering state" Jul 15 04:46:25.354327 containerd[1527]: time="2025-07-15T04:46:25.354307322Z" level=info msg="Start event monitor" Jul 15 04:46:25.354372 containerd[1527]: time="2025-07-15T04:46:25.354335858Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:46:25.354372 containerd[1527]: time="2025-07-15T04:46:25.354345744Z" level=info msg="Start streaming server" Jul 15 04:46:25.354372 containerd[1527]: time="2025-07-15T04:46:25.354354589Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:46:25.354443 containerd[1527]: 
time="2025-07-15T04:46:25.354376201Z" level=info msg="runtime interface starting up..." Jul 15 04:46:25.354443 containerd[1527]: time="2025-07-15T04:46:25.354384406Z" level=info msg="starting plugins..." Jul 15 04:46:25.354443 containerd[1527]: time="2025-07-15T04:46:25.354399334Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:46:25.354765 containerd[1527]: time="2025-07-15T04:46:25.354738367Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 04:46:25.354891 containerd[1527]: time="2025-07-15T04:46:25.354868921Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:46:25.355026 containerd[1527]: time="2025-07-15T04:46:25.355013043Z" level=info msg="containerd successfully booted in 0.125151s" Jul 15 04:46:25.355198 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 04:46:25.395968 tar[1523]: linux-arm64/README.md Jul 15 04:46:25.418199 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 04:46:25.435743 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:46:25.455987 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:46:25.460629 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:46:25.484235 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:46:25.484464 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:46:25.486891 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:46:25.516459 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 04:46:25.519066 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:46:25.521342 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 04:46:25.522443 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 04:46:26.362522 systemd-networkd[1433]: eth0: Gained IPv6LL Jul 15 04:46:26.364989 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:46:26.366501 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 04:46:26.368689 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 04:46:26.370966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:26.372999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 04:46:26.405575 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:46:26.407061 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 04:46:26.407266 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 04:46:26.409104 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 04:46:26.925991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:26.927353 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 04:46:26.929864 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:46:26.932534 systemd[1]: Startup finished in 2.058s (kernel) + 16.693s (initrd) + 3.674s (userspace) = 22.426s. 
Jul 15 04:46:27.394429 kubelet[1636]: E0715 04:46:27.394296 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:46:27.397161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:46:27.397294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:46:27.397675 systemd[1]: kubelet.service: Consumed 820ms CPU time, 257.9M memory peak. Jul 15 04:46:29.857235 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:46:29.858501 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394). Jul 15 04:46:29.967920 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:29.971545 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:29.987747 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:46:29.988703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 04:46:29.994428 systemd-logind[1508]: New session 1 of user core. Jul 15 04:46:30.008268 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:46:30.011475 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:46:30.025501 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:46:30.027929 systemd-logind[1508]: New session c1 of user core. Jul 15 04:46:30.141150 systemd[1655]: Queued start job for default target default.target. Jul 15 04:46:30.150286 systemd[1655]: Created slice app.slice - User Application Slice. Jul 15 04:46:30.150318 systemd[1655]: Reached target paths.target - Paths. Jul 15 04:46:30.150378 systemd[1655]: Reached target timers.target - Timers. Jul 15 04:46:30.151576 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:46:30.162983 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:46:30.163109 systemd[1655]: Reached target sockets.target - Sockets. Jul 15 04:46:30.163158 systemd[1655]: Reached target basic.target - Basic System. Jul 15 04:46:30.163187 systemd[1655]: Reached target default.target - Main User Target. Jul 15 04:46:30.163213 systemd[1655]: Startup finished in 128ms. Jul 15 04:46:30.163334 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:46:30.164681 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:46:30.230972 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:34400.service - OpenSSH per-connection server daemon (10.0.0.1:34400). Jul 15 04:46:30.286837 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:30.288234 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:30.296296 systemd-logind[1508]: New session 2 of user core. Jul 15 04:46:30.306552 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 04:46:30.357109 sshd[1669]: Connection closed by 10.0.0.1 port 34400 Jul 15 04:46:30.357543 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:30.367393 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:34400.service: Deactivated successfully. Jul 15 04:46:30.368950 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 04:46:30.369735 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit. Jul 15 04:46:30.371849 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412). Jul 15 04:46:30.372719 systemd-logind[1508]: Removed session 2. Jul 15 04:46:30.428680 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:30.429934 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:30.433700 systemd-logind[1508]: New session 3 of user core. Jul 15 04:46:30.449543 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 04:46:30.497139 sshd[1678]: Connection closed by 10.0.0.1 port 34412 Jul 15 04:46:30.496982 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:30.515474 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:34412.service: Deactivated successfully. Jul 15 04:46:30.516960 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 04:46:30.518949 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit. Jul 15 04:46:30.521001 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:34418.service - OpenSSH per-connection server daemon (10.0.0.1:34418). Jul 15 04:46:30.521881 systemd-logind[1508]: Removed session 3. Jul 15 04:46:30.571697 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 34418 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:30.573069 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:30.577043 systemd-logind[1508]: New session 4 of user core. Jul 15 04:46:30.584520 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 04:46:30.636035 sshd[1687]: Connection closed by 10.0.0.1 port 34418 Jul 15 04:46:30.635896 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:30.655513 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:34418.service: Deactivated successfully. Jul 15 04:46:30.657016 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 04:46:30.659414 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit. Jul 15 04:46:30.661089 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:34420.service - OpenSSH per-connection server daemon (10.0.0.1:34420). Jul 15 04:46:30.662141 systemd-logind[1508]: Removed session 4. Jul 15 04:46:30.726171 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 34420 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:30.727671 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:30.734599 systemd-logind[1508]: New session 5 of user core. Jul 15 04:46:30.749537 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 15 04:46:30.810694 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 04:46:30.810969 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:30.828265 sudo[1697]: pam_unix(sudo:session): session closed for user root Jul 15 04:46:30.830415 sshd[1696]: Connection closed by 10.0.0.1 port 34420 Jul 15 04:46:30.830354 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:30.840487 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:34420.service: Deactivated successfully. Jul 15 04:46:30.841997 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 04:46:30.843950 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit. Jul 15 04:46:30.846404 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:34436.service - OpenSSH per-connection server daemon (10.0.0.1:34436). Jul 15 04:46:30.847064 systemd-logind[1508]: Removed session 5. Jul 15 04:46:30.908020 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 34436 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:30.909425 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:30.913447 systemd-logind[1508]: New session 6 of user core. Jul 15 04:46:30.922542 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 04:46:30.974299 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 04:46:30.974614 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:30.979580 sudo[1708]: pam_unix(sudo:session): session closed for user root Jul 15 04:46:30.984697 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 04:46:30.984985 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:30.993595 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:46:31.035432 augenrules[1730]: No rules Jul 15 04:46:31.036692 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:46:31.037457 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:46:31.038821 sudo[1707]: pam_unix(sudo:session): session closed for user root Jul 15 04:46:31.040198 sshd[1706]: Connection closed by 10.0.0.1 port 34436 Jul 15 04:46:31.040601 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:31.050457 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:34436.service: Deactivated successfully. Jul 15 04:46:31.052819 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 04:46:31.053507 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit. Jul 15 04:46:31.055701 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:34446.service - OpenSSH per-connection server daemon (10.0.0.1:34446). Jul 15 04:46:31.056206 systemd-logind[1508]: Removed session 6. Jul 15 04:46:31.104044 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 34446 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:46:31.105373 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:31.109193 systemd-logind[1508]: New session 7 of user core. Jul 15 04:46:31.121594 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 15 04:46:31.172684 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 04:46:31.173327 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:31.511143 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 04:46:31.528740 (dockerd)[1764]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 04:46:31.784420 dockerd[1764]: time="2025-07-15T04:46:31.784268948Z" level=info msg="Starting up" Jul 15 04:46:31.785166 dockerd[1764]: time="2025-07-15T04:46:31.785141478Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 04:46:31.797612 dockerd[1764]: time="2025-07-15T04:46:31.797552035Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 04:46:31.894826 dockerd[1764]: time="2025-07-15T04:46:31.894772716Z" level=info msg="Loading containers: start." Jul 15 04:46:31.902477 kernel: Initializing XFRM netlink socket Jul 15 04:46:32.116571 systemd-networkd[1433]: docker0: Link UP Jul 15 04:46:32.120233 dockerd[1764]: time="2025-07-15T04:46:32.120054677Z" level=info msg="Loading containers: done." Jul 15 04:46:32.134122 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1207662810-merged.mount: Deactivated successfully. Jul 15 04:46:32.137654 dockerd[1764]: time="2025-07-15T04:46:32.137604713Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 04:46:32.137735 dockerd[1764]: time="2025-07-15T04:46:32.137700677Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 04:46:32.137820 dockerd[1764]: time="2025-07-15T04:46:32.137794639Z" level=info msg="Initializing buildkit" Jul 15 04:46:32.160217 dockerd[1764]: time="2025-07-15T04:46:32.160164071Z" level=info msg="Completed buildkit initialization" Jul 15 04:46:32.167412 dockerd[1764]: time="2025-07-15T04:46:32.167326894Z" level=info msg="Daemon has completed initialization" Jul 15 04:46:32.167530 dockerd[1764]: time="2025-07-15T04:46:32.167416655Z" level=info msg="API listen on /run/docker.sock" Jul 15 04:46:32.167583 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 04:46:32.663823 containerd[1527]: time="2025-07-15T04:46:32.663774510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 15 04:46:33.475630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830827013.mount: Deactivated successfully. 
Jul 15 04:46:34.449751 containerd[1527]: time="2025-07-15T04:46:34.449706192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:34.450218 containerd[1527]: time="2025-07-15T04:46:34.450185917Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 15 04:46:34.450930 containerd[1527]: time="2025-07-15T04:46:34.450891899Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:34.453889 containerd[1527]: time="2025-07-15T04:46:34.453850924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:34.454741 containerd[1527]: time="2025-07-15T04:46:34.454706810Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.790866431s" Jul 15 04:46:34.454786 containerd[1527]: time="2025-07-15T04:46:34.454745507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 15 04:46:34.457611 containerd[1527]: time="2025-07-15T04:46:34.457580839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 15 04:46:35.544053 containerd[1527]: time="2025-07-15T04:46:35.543978025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:35.544495 containerd[1527]: time="2025-07-15T04:46:35.544449940Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 15 04:46:35.545342 containerd[1527]: time="2025-07-15T04:46:35.545305494Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:35.547593 containerd[1527]: time="2025-07-15T04:46:35.547547863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:35.548652 containerd[1527]: time="2025-07-15T04:46:35.548610944Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.090997571s" Jul 15 04:46:35.548691 containerd[1527]: time="2025-07-15T04:46:35.548648799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 15 04:46:35.549224 
containerd[1527]: time="2025-07-15T04:46:35.549190864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 15 04:46:36.931305 containerd[1527]: time="2025-07-15T04:46:36.931255198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:36.944255 containerd[1527]: time="2025-07-15T04:46:36.944193269Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 15 04:46:36.997091 containerd[1527]: time="2025-07-15T04:46:36.997032312Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:37.011935 containerd[1527]: time="2025-07-15T04:46:37.011869737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:37.012993 containerd[1527]: time="2025-07-15T04:46:37.012934631Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.463707152s" Jul 15 04:46:37.012993 containerd[1527]: time="2025-07-15T04:46:37.012968084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 15 04:46:37.013416 containerd[1527]: time="2025-07-15T04:46:37.013393289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 15 04:46:37.582883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:46:37.586543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:37.734530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:37.737671 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:46:37.879745 kubelet[2055]: E0715 04:46:37.872678 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:46:37.875883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:46:37.876007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:46:37.876300 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107M memory peak. Jul 15 04:46:38.140446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562883978.mount: Deactivated successfully. 
Jul 15 04:46:38.502310 containerd[1527]: time="2025-07-15T04:46:38.502143695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:38.503187 containerd[1527]: time="2025-07-15T04:46:38.502987372Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 15 04:46:38.503837 containerd[1527]: time="2025-07-15T04:46:38.503793036Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:38.505674 containerd[1527]: time="2025-07-15T04:46:38.505643893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:38.506331 containerd[1527]: time="2025-07-15T04:46:38.506301740Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.49287928s" Jul 15 04:46:38.506331 containerd[1527]: time="2025-07-15T04:46:38.506330431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 15 04:46:38.506785 containerd[1527]: time="2025-07-15T04:46:38.506725740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 04:46:39.096563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244440276.mount: Deactivated successfully. 
Jul 15 04:46:39.946754 containerd[1527]: time="2025-07-15T04:46:39.946695024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:39.959756 containerd[1527]: time="2025-07-15T04:46:39.959711373Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 15 04:46:39.967989 containerd[1527]: time="2025-07-15T04:46:39.967954140Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:39.971202 containerd[1527]: time="2025-07-15T04:46:39.970583899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:39.971802 containerd[1527]: time="2025-07-15T04:46:39.971663373Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.464807904s" Jul 15 04:46:39.971802 containerd[1527]: time="2025-07-15T04:46:39.971699706Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 15 04:46:39.972256 containerd[1527]: time="2025-07-15T04:46:39.972224498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 04:46:40.386535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968601080.mount: Deactivated successfully. 
Jul 15 04:46:40.392722 containerd[1527]: time="2025-07-15T04:46:40.392675392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:46:40.393402 containerd[1527]: time="2025-07-15T04:46:40.393374199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 15 04:46:40.394101 containerd[1527]: time="2025-07-15T04:46:40.394041475Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:46:40.396246 containerd[1527]: time="2025-07-15T04:46:40.396196477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:46:40.397386 containerd[1527]: time="2025-07-15T04:46:40.397159537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 424.898946ms" Jul 15 04:46:40.397386 containerd[1527]: time="2025-07-15T04:46:40.397192109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 04:46:40.397793 containerd[1527]: time="2025-07-15T04:46:40.397755788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 04:46:40.842962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772985118.mount: Deactivated successfully. 
Jul 15 04:46:42.455692 containerd[1527]: time="2025-07-15T04:46:42.455611103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:42.456191 containerd[1527]: time="2025-07-15T04:46:42.456154923Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 15 04:46:42.456809 containerd[1527]: time="2025-07-15T04:46:42.456782171Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:42.461240 containerd[1527]: time="2025-07-15T04:46:42.461176109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:42.462429 containerd[1527]: time="2025-07-15T04:46:42.462394673Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.064593829s" Jul 15 04:46:42.462429 containerd[1527]: time="2025-07-15T04:46:42.462427364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 15 04:46:48.083036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 04:46:48.084852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:48.189807 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 04:46:48.189912 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 04:46:48.190197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:48.190530 systemd[1]: kubelet.service: Consumed 68ms CPU time, 82.2M memory peak. Jul 15 04:46:48.193869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:48.213928 systemd[1]: Reload requested from client PID 2218 ('systemctl') (unit session-7.scope)... Jul 15 04:46:48.213943 systemd[1]: Reloading... Jul 15 04:46:48.282394 zram_generator::config[2257]: No configuration found. Jul 15 04:46:48.486223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:46:48.568997 systemd[1]: Reloading finished in 354 ms. Jul 15 04:46:48.619339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:48.621266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:48.622926 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:46:48.623115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:48.623151 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.1M memory peak. Jul 15 04:46:48.625463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:48.776300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 04:46:48.781421 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:46:48.815391 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:46:48.815391 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:46:48.815391 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:46:48.815706 kubelet[2307]: I0715 04:46:48.815486 2307 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:46:49.472025 kubelet[2307]: I0715 04:46:49.471983 2307 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 04:46:49.472025 kubelet[2307]: I0715 04:46:49.472012 2307 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:46:49.472263 kubelet[2307]: I0715 04:46:49.472234 2307 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 04:46:49.505876 kubelet[2307]: I0715 04:46:49.505834 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:46:49.506139 kubelet[2307]: E0715 04:46:49.506106 2307 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 04:46:49.512599 kubelet[2307]: I0715 04:46:49.512575 2307 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:46:49.515149 kubelet[2307]: I0715 04:46:49.515131 2307 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:46:49.516279 kubelet[2307]: I0715 04:46:49.516225 2307 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:46:49.516440 kubelet[2307]: I0715 04:46:49.516271 2307 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:46:49.516542 kubelet[2307]: I0715 04:46:49.516493 2307 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:46:49.516542 kubelet[2307]: I0715 04:46:49.516501 2307 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 04:46:49.517219 kubelet[2307]: I0715 04:46:49.517182 2307 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:46:49.521339 kubelet[2307]: I0715 04:46:49.521299 2307 kubelet.go:480] "Attempting to sync node with API server" Jul 15 04:46:49.521339 kubelet[2307]: I0715 04:46:49.521325 2307 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:46:49.521490 kubelet[2307]: I0715 04:46:49.521353 2307 kubelet.go:386] "Adding apiserver pod source" Jul 15 04:46:49.521490 kubelet[2307]: I0715 04:46:49.521379 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:46:49.522523 kubelet[2307]: I0715 04:46:49.522290 2307 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:46:49.523029 kubelet[2307]: I0715 04:46:49.523007 2307 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 04:46:49.523148 kubelet[2307]: W0715 04:46:49.523132 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 04:46:49.525398 kubelet[2307]: E0715 04:46:49.525347 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 04:46:49.525661 kubelet[2307]: I0715 04:46:49.525606 2307 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:46:49.525661 kubelet[2307]: I0715 04:46:49.525641 2307 server.go:1289] "Started kubelet" Jul 15 04:46:49.528388 kubelet[2307]: I0715 04:46:49.527450 2307 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:46:49.528479 kubelet[2307]: I0715 04:46:49.528464 2307 server.go:317] "Adding debug handlers to kubelet server" Jul 15 04:46:49.528572 kubelet[2307]: E0715 04:46:49.528539 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 04:46:49.531420 kubelet[2307]: I0715 04:46:49.531352 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:46:49.531739 kubelet[2307]: I0715 04:46:49.531720 2307 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:46:49.537190 kubelet[2307]: I0715 04:46:49.533391 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:46:49.537483 kubelet[2307]: E0715 04:46:49.531088 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18525350572d4eb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 04:46:49.525620406 +0000 UTC m=+0.740864362,LastTimestamp:2025-07-15 04:46:49.525620406 +0000 UTC m=+0.740864362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 04:46:49.537710 kubelet[2307]: I0715 04:46:49.537648 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:46:49.539669 kubelet[2307]: E0715 04:46:49.539652 2307 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:46:49.539902 kubelet[2307]: I0715 04:46:49.539875 2307 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:46:49.540244 kubelet[2307]: E0715 04:46:49.540175 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms" Jul 15 04:46:49.541425 kubelet[2307]: E0715 04:46:49.540305 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:46:49.541483 kubelet[2307]: I0715 04:46:49.541428 2307 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:46:49.541513 kubelet[2307]: I0715 04:46:49.541504 2307 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:46:49.542147 kubelet[2307]: E0715 04:46:49.542125 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 04:46:49.544586 kubelet[2307]: I0715 04:46:49.544567 2307 factory.go:223] Registration of the systemd container factory successfully Jul 15 04:46:49.544764 kubelet[2307]: I0715 04:46:49.544745 2307 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:46:49.546439 kubelet[2307]: I0715 04:46:49.546417 2307 factory.go:223] Registration of the containerd container factory successfully Jul 15 04:46:49.550215 kubelet[2307]: I0715 04:46:49.550188 2307 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 04:46:49.551293 kubelet[2307]: I0715 04:46:49.551205 2307 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 04:46:49.551293 kubelet[2307]: I0715 04:46:49.551222 2307 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 04:46:49.551293 kubelet[2307]: I0715 04:46:49.551238 2307 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 04:46:49.551293 kubelet[2307]: I0715 04:46:49.551244 2307 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 04:46:49.551432 kubelet[2307]: E0715 04:46:49.551400 2307 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:46:49.553881 kubelet[2307]: E0715 04:46:49.553835 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 04:46:49.555965 kubelet[2307]: I0715 04:46:49.555893 2307 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:46:49.555965 kubelet[2307]: I0715 04:46:49.555953 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:46:49.555965 kubelet[2307]: I0715 04:46:49.555971 2307 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:46:49.641341 kubelet[2307]: E0715 04:46:49.641295 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:46:49.642563 kubelet[2307]: I0715 04:46:49.642527 2307 policy_none.go:49] "None policy: Start" Jul 15 04:46:49.642563 kubelet[2307]: I0715 04:46:49.642564 2307 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:46:49.642617 kubelet[2307]: I0715 04:46:49.642577 2307 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:46:49.648479 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 04:46:49.652218 kubelet[2307]: E0715 04:46:49.652195 2307 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:46:49.666798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:46:49.669738 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 04:46:49.677101 kubelet[2307]: E0715 04:46:49.677051 2307 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 04:46:49.677298 kubelet[2307]: I0715 04:46:49.677272 2307 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:46:49.677328 kubelet[2307]: I0715 04:46:49.677289 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:46:49.677892 kubelet[2307]: I0715 04:46:49.677873 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:46:49.678669 kubelet[2307]: E0715 04:46:49.678598 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 04:46:49.678669 kubelet[2307]: E0715 04:46:49.678636 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 04:46:49.741851 kubelet[2307]: E0715 04:46:49.741749 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" Jul 15 04:46:49.779312 kubelet[2307]: I0715 04:46:49.779290 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:46:49.779764 kubelet[2307]: E0715 04:46:49.779720 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Jul 15 04:46:49.861638 systemd[1]: Created slice kubepods-burstable-pod4576b19423dbb49d9882a648ff04b28a.slice - libcontainer container kubepods-burstable-pod4576b19423dbb49d9882a648ff04b28a.slice. Jul 15 04:46:49.890579 kubelet[2307]: E0715 04:46:49.890548 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:49.893824 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 15 04:46:49.895647 kubelet[2307]: E0715 04:46:49.895558 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:49.897089 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 15 04:46:49.898415 kubelet[2307]: E0715 04:46:49.898396 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:49.943829 kubelet[2307]: I0715 04:46:49.943786 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:49.943916 kubelet[2307]: I0715 04:46:49.943846 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:49.943916 kubelet[2307]: I0715 04:46:49.943873 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:49.943916 kubelet[2307]: I0715 04:46:49.943891 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4576b19423dbb49d9882a648ff04b28a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4576b19423dbb49d9882a648ff04b28a\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:49.943916 kubelet[2307]: I0715 04:46:49.943905 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:49.943998 kubelet[2307]: I0715 04:46:49.943919 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:49.943998 kubelet[2307]: I0715 04:46:49.943940 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:49.943998 kubelet[2307]: I0715 04:46:49.943953 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4576b19423dbb49d9882a648ff04b28a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4576b19423dbb49d9882a648ff04b28a\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:49.943998 kubelet[2307]: I0715 04:46:49.943971 2307 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4576b19423dbb49d9882a648ff04b28a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4576b19423dbb49d9882a648ff04b28a\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:49.980954 kubelet[2307]: I0715 04:46:49.980928 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:46:49.981296 kubelet[2307]: E0715 04:46:49.981268 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Jul 15 04:46:50.143137 kubelet[2307]: E0715 04:46:50.143069 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms" Jul 15 04:46:50.191750 kubelet[2307]: E0715 04:46:50.191634 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.192324 containerd[1527]: time="2025-07-15T04:46:50.192247737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4576b19423dbb49d9882a648ff04b28a,Namespace:kube-system,Attempt:0,}" Jul 15 04:46:50.196201 kubelet[2307]: E0715 04:46:50.196123 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.196652 containerd[1527]: time="2025-07-15T04:46:50.196474705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 15 04:46:50.198870 kubelet[2307]: E0715 04:46:50.198839 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.199186 containerd[1527]: time="2025-07-15T04:46:50.199137710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 15 04:46:50.213102 containerd[1527]: time="2025-07-15T04:46:50.213059972Z" level=info msg="connecting to shim 84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef" address="unix:///run/containerd/s/370cb0412b96fb62cde79fe5cb4a5c15cfeea07c96fb7258d4b93e4a2ada70fd" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:46:50.229968 containerd[1527]: time="2025-07-15T04:46:50.229927353Z" level=info msg="connecting to shim 6f78f22109ed250ebb42ef8a1209780219d257c99511f775df4300ee80874bbf" address="unix:///run/containerd/s/2c429bc5331614056e05462d4a59a2ed3de048f5900777ffd8c6a5a718670182" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:46:50.234121 containerd[1527]: time="2025-07-15T04:46:50.234083222Z" level=info msg="connecting to shim d3d927c1d5d29dac2da40f321e6fc5f4df7dfec51eeaaa6ba9905dd442b27dd6" address="unix:///run/containerd/s/4971637ef45a169d8788cd19b433d95b1de5795b057bdbf7737068bcc761ec96" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:46:50.238526 systemd[1]: Started cri-containerd-84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef.scope - libcontainer container 
84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef. Jul 15 04:46:50.257606 systemd[1]: Started cri-containerd-6f78f22109ed250ebb42ef8a1209780219d257c99511f775df4300ee80874bbf.scope - libcontainer container 6f78f22109ed250ebb42ef8a1209780219d257c99511f775df4300ee80874bbf. Jul 15 04:46:50.260676 systemd[1]: Started cri-containerd-d3d927c1d5d29dac2da40f321e6fc5f4df7dfec51eeaaa6ba9905dd442b27dd6.scope - libcontainer container d3d927c1d5d29dac2da40f321e6fc5f4df7dfec51eeaaa6ba9905dd442b27dd6. Jul 15 04:46:50.279958 containerd[1527]: time="2025-07-15T04:46:50.279911975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4576b19423dbb49d9882a648ff04b28a,Namespace:kube-system,Attempt:0,} returns sandbox id \"84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef\"" Jul 15 04:46:50.281806 kubelet[2307]: E0715 04:46:50.281751 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.286081 containerd[1527]: time="2025-07-15T04:46:50.286044713Z" level=info msg="CreateContainer within sandbox \"84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:46:50.295727 containerd[1527]: time="2025-07-15T04:46:50.295635060Z" level=info msg="Container 3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:46:50.303130 containerd[1527]: time="2025-07-15T04:46:50.303094260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3d927c1d5d29dac2da40f321e6fc5f4df7dfec51eeaaa6ba9905dd442b27dd6\"" Jul 15 04:46:50.304657 kubelet[2307]: E0715 04:46:50.304511 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.304948 containerd[1527]: time="2025-07-15T04:46:50.304905366Z" level=info msg="CreateContainer within sandbox \"84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b\"" Jul 15 04:46:50.306766 containerd[1527]: time="2025-07-15T04:46:50.306487053Z" level=info msg="StartContainer for \"3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b\"" Jul 15 04:46:50.308764 containerd[1527]: time="2025-07-15T04:46:50.308721068Z" level=info msg="CreateContainer within sandbox \"d3d927c1d5d29dac2da40f321e6fc5f4df7dfec51eeaaa6ba9905dd442b27dd6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:46:50.309557 containerd[1527]: time="2025-07-15T04:46:50.309508950Z" level=info msg="connecting to shim 3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b" address="unix:///run/containerd/s/370cb0412b96fb62cde79fe5cb4a5c15cfeea07c96fb7258d4b93e4a2ada70fd" protocol=ttrpc version=3 Jul 15 04:46:50.314898 containerd[1527]: time="2025-07-15T04:46:50.314751139Z" level=info msg="Container 1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:46:50.315329 containerd[1527]: time="2025-07-15T04:46:50.315296440Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f78f22109ed250ebb42ef8a1209780219d257c99511f775df4300ee80874bbf\"" Jul 15 04:46:50.316307 kubelet[2307]: E0715 04:46:50.316142 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.319334 containerd[1527]: time="2025-07-15T04:46:50.319297909Z" level=info msg="CreateContainer within sandbox \"6f78f22109ed250ebb42ef8a1209780219d257c99511f775df4300ee80874bbf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:46:50.326533 containerd[1527]: time="2025-07-15T04:46:50.326503363Z" level=info msg="Container 86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:46:50.328106 containerd[1527]: time="2025-07-15T04:46:50.328068846Z" level=info msg="CreateContainer within sandbox \"d3d927c1d5d29dac2da40f321e6fc5f4df7dfec51eeaaa6ba9905dd442b27dd6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e\"" Jul 15 04:46:50.328629 containerd[1527]: time="2025-07-15T04:46:50.328599383Z" level=info msg="StartContainer for \"1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e\"" Jul 15 04:46:50.329744 containerd[1527]: time="2025-07-15T04:46:50.329719991Z" level=info msg="connecting to shim 1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e" address="unix:///run/containerd/s/4971637ef45a169d8788cd19b433d95b1de5795b057bdbf7737068bcc761ec96" protocol=ttrpc version=3 Jul 15 04:46:50.331690 systemd[1]: Started cri-containerd-3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b.scope - libcontainer container 3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b. Jul 15 04:46:50.337715 containerd[1527]: time="2025-07-15T04:46:50.337676958Z" level=info msg="CreateContainer within sandbox \"6f78f22109ed250ebb42ef8a1209780219d257c99511f775df4300ee80874bbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e\"" Jul 15 04:46:50.338170 containerd[1527]: time="2025-07-15T04:46:50.338141878Z" level=info msg="StartContainer for \"86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e\"" Jul 15 04:46:50.339981 containerd[1527]: time="2025-07-15T04:46:50.339938220Z" level=info msg="connecting to shim 86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e" address="unix:///run/containerd/s/2c429bc5331614056e05462d4a59a2ed3de048f5900777ffd8c6a5a718670182" protocol=ttrpc version=3 Jul 15 04:46:50.351527 systemd[1]: Started cri-containerd-1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e.scope - libcontainer container 1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e. Jul 15 04:46:50.355689 systemd[1]: Started cri-containerd-86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e.scope - libcontainer container 86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e. 
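[Editor's note] Each "connecting to shim ..." entry above carries a unix socket under /run/containerd/s/, and containers reuse the socket of the sandbox they belong to: container 3a44c283... (kube-apiserver) connects to the same /s/370cb041... address as sandbox 84962fdd.... A small parsing sketch that groups entries by that socket, assuming journal lines shaped like the ones above as input:

```python
import re
from collections import defaultdict

# Group "connecting to shim" entries by shim socket address, which pairs each
# container with its sandbox. The two sample lines are copied from the log.
lines = [
    'msg="connecting to shim 84962fdd28e2986e4e12d6d472ed5dd065164949908dfadfb78851887578afef" address="unix:///run/containerd/s/370cb0412b96fb62cde79fe5cb4a5c15cfeea07c96fb7258d4b93e4a2ada70fd"',
    'msg="connecting to shim 3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b" address="unix:///run/containerd/s/370cb0412b96fb62cde79fe5cb4a5c15cfeea07c96fb7258d4b93e4a2ada70fd"',
]
pattern = re.compile(r'connecting to shim (\w+)" address="(unix://[^"]+)"')
by_socket = defaultdict(list)
for line in lines:
    m = pattern.search(line)
    if m:
        by_socket[m.group(2)].append(m.group(1))

for socket, ids in by_socket.items():
    print(socket, "->", ids)  # sandbox id first, then the containers sharing it
```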
Jul 15 04:46:50.377653 containerd[1527]: time="2025-07-15T04:46:50.377576105Z" level=info msg="StartContainer for \"3a44c28340fc24c67541c2fbb9eb479ab016a503558b055e1f2b07960708124b\" returns successfully" Jul 15 04:46:50.382561 kubelet[2307]: I0715 04:46:50.382162 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:46:50.382561 kubelet[2307]: E0715 04:46:50.382517 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Jul 15 04:46:50.394095 containerd[1527]: time="2025-07-15T04:46:50.392197268Z" level=info msg="StartContainer for \"1b7284c049aab3ecb0ca39318564facc79d488ce3e7f1932d85fbbcda52c8c9e\" returns successfully" Jul 15 04:46:50.440864 containerd[1527]: time="2025-07-15T04:46:50.436219555Z" level=info msg="StartContainer for \"86c381a375d50ccffbafb69fc1b6537571a34be357d9c22b70b133baebad982e\" returns successfully" Jul 15 04:46:50.560883 kubelet[2307]: E0715 04:46:50.560853 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:50.561201 kubelet[2307]: E0715 04:46:50.561179 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.563877 kubelet[2307]: E0715 04:46:50.563856 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:50.563987 kubelet[2307]: E0715 04:46:50.563969 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:50.566289 kubelet[2307]: E0715 04:46:50.566268 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:50.566552 kubelet[2307]: E0715 04:46:50.566535 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:51.184548 kubelet[2307]: I0715 04:46:51.184515 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:46:51.569117 kubelet[2307]: E0715 04:46:51.569021 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:51.569203 kubelet[2307]: E0715 04:46:51.569141 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:51.570701 kubelet[2307]: E0715 04:46:51.570681 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 04:46:51.570803 kubelet[2307]: E0715 04:46:51.570785 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:52.190881 kubelet[2307]: E0715 04:46:52.190821 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 04:46:52.216469 kubelet[2307]: I0715 04:46:52.216434 2307 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 04:46:52.240756 kubelet[2307]: I0715 04:46:52.240173 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:52.263030 kubelet[2307]: E0715 04:46:52.262979 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:52.263030 kubelet[2307]: I0715 04:46:52.263010 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:52.264949 kubelet[2307]: E0715 04:46:52.264843 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:52.264949 kubelet[2307]: I0715 04:46:52.264867 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:52.266464 kubelet[2307]: E0715 04:46:52.266374 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:52.527479 kubelet[2307]: I0715 04:46:52.527260 2307 apiserver.go:52] "Watching apiserver" Jul 15 04:46:52.541754 kubelet[2307]: I0715 04:46:52.541661 2307 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:46:52.826656 kubelet[2307]: I0715 04:46:52.826559 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:52.828739 kubelet[2307]: E0715 04:46:52.828658 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:52.829379 kubelet[2307]: E0715 04:46:52.828813 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:54.238457 systemd[1]: Reload requested from client PID 2590 ('systemctl') (unit session-7.scope)... Jul 15 04:46:54.238472 systemd[1]: Reloading... Jul 15 04:46:54.297381 zram_generator::config[2632]: No configuration found. Jul 15 04:46:54.367117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:46:54.464331 systemd[1]: Reloading finished in 225 ms. Jul 15 04:46:54.495455 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:54.508883 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:46:54.509135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:54.509188 systemd[1]: kubelet.service: Consumed 1.159s CPU time, 128.3M memory peak. Jul 15 04:46:54.511403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 04:46:54.656792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:54.661202 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:46:54.698270 kubelet[2675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:46:54.698270 kubelet[2675]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:46:54.698270 kubelet[2675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:46:54.698864 kubelet[2675]: I0715 04:46:54.698319 2675 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:46:54.707391 kubelet[2675]: I0715 04:46:54.706551 2675 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 04:46:54.707391 kubelet[2675]: I0715 04:46:54.706580 2675 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:46:54.707391 kubelet[2675]: I0715 04:46:54.706755 2675 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 04:46:54.707922 kubelet[2675]: I0715 04:46:54.707899 2675 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 04:46:54.712146 kubelet[2675]: I0715 04:46:54.712120 2675 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:46:54.717241 kubelet[2675]: I0715 04:46:54.717205 2675 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:46:54.720125 kubelet[2675]: I0715 04:46:54.720096 2675 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:46:54.720342 kubelet[2675]: I0715 04:46:54.720300 2675 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:46:54.720487 kubelet[2675]: I0715 04:46:54.720332 2675 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:46:54.720568 kubelet[2675]: I0715 04:46:54.720490 2675 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:46:54.720568 kubelet[2675]: I0715 04:46:54.720498 2675 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 04:46:54.720568 kubelet[2675]: I0715 04:46:54.720540 2675 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:46:54.720696 kubelet[2675]: I0715 04:46:54.720673 2675 kubelet.go:480] "Attempting to sync node with API server" Jul 15 04:46:54.720696 kubelet[2675]: I0715 04:46:54.720688 2675 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:46:54.720749 kubelet[2675]: I0715 04:46:54.720714 2675 kubelet.go:386] "Adding apiserver pod source" Jul 15 04:46:54.720749 kubelet[2675]: I0715 04:46:54.720727 2675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:46:54.721610 kubelet[2675]: I0715 04:46:54.721574 2675 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:46:54.722151 kubelet[2675]: I0715 04:46:54.722123 2675 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 04:46:54.724767 kubelet[2675]: I0715 04:46:54.724730 2675 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:46:54.724823 kubelet[2675]: I0715 04:46:54.724773 2675 server.go:1289] "Started kubelet" Jul 15 04:46:54.727381 kubelet[2675]: I0715 04:46:54.724918 2675 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:46:54.727381 kubelet[2675]: I0715 
04:46:54.725032 2675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:46:54.727381 kubelet[2675]: I0715 04:46:54.725245 2675 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:46:54.727381 kubelet[2675]: I0715 04:46:54.726203 2675 server.go:317] "Adding debug handlers to kubelet server" Jul 15 04:46:54.727381 kubelet[2675]: I0715 04:46:54.726877 2675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:46:54.727381 kubelet[2675]: I0715 04:46:54.727069 2675 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:46:54.727381 kubelet[2675]: E0715 04:46:54.727236 2675 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:46:54.727381 kubelet[2675]: I0715 04:46:54.727264 2675 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:46:54.729385 kubelet[2675]: I0715 04:46:54.728918 2675 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:46:54.729574 kubelet[2675]: I0715 04:46:54.729547 2675 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:46:54.732809 kubelet[2675]: I0715 04:46:54.732229 2675 factory.go:223] Registration of the systemd container factory successfully Jul 15 04:46:54.737016 kubelet[2675]: I0715 04:46:54.736874 2675 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:46:54.748123 kubelet[2675]: I0715 04:46:54.747277 2675 factory.go:223] Registration of the containerd container factory successfully Jul 15 04:46:54.752944 kubelet[2675]: E0715 04:46:54.752910 2675 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:46:54.753594 kubelet[2675]: I0715 04:46:54.753565 2675 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 04:46:54.754505 kubelet[2675]: I0715 04:46:54.754476 2675 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 04:46:54.754505 kubelet[2675]: I0715 04:46:54.754500 2675 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 04:46:54.754577 kubelet[2675]: I0715 04:46:54.754519 2675 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
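[Editor's note] The NodeConfig dump logged a little above lists the hard eviction thresholds this kubelet enforces: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A minimal sketch of how such LessThan thresholds are evaluated, with invented sample stats (the real eviction manager also handles grace periods, reclaim targets, and pod ranking, none of which is shown):

```python
# Hard eviction thresholds copied from the NodeConfig logged above; the sample
# node stats further down are invented for illustration.
THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024**2},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def breached(signal, available, capacity):
    """Return True if 'available' falls below the LessThan threshold."""
    t = THRESHOLDS[signal]
    limit = t.get("quantity", t.get("percentage", 0) * capacity)
    return available < limit

# Invented example: 8Gi of RAM with only 64Mi left would breach memory.available.
print(breached("memory.available", available=64 * 1024**2, capacity=8 * 1024**3))    # True
print(breached("nodefs.available", available=20 * 1024**3, capacity=100 * 1024**3))  # False (20% free)
```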
Jul 15 04:46:54.754577 kubelet[2675]: I0715 04:46:54.754525 2675 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 04:46:54.754577 kubelet[2675]: E0715 04:46:54.754564 2675 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:46:54.785195 kubelet[2675]: I0715 04:46:54.785157 2675 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:46:54.785195 kubelet[2675]: I0715 04:46:54.785178 2675 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:46:54.785195 kubelet[2675]: I0715 04:46:54.785201 2675 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:46:54.785392 kubelet[2675]: I0715 04:46:54.785321 2675 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:46:54.785392 kubelet[2675]: I0715 04:46:54.785340 2675 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:46:54.785392 kubelet[2675]: I0715 04:46:54.785372 2675 policy_none.go:49] "None policy: Start" Jul 15 04:46:54.785392 kubelet[2675]: I0715 04:46:54.785383 2675 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:46:54.785392 kubelet[2675]: I0715 04:46:54.785391 2675 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:46:54.785507 kubelet[2675]: I0715 04:46:54.785476 2675 state_mem.go:75] "Updated machine memory state" Jul 15 04:46:54.789073 kubelet[2675]: E0715 04:46:54.789035 2675 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 04:46:54.789413 kubelet[2675]: I0715 04:46:54.789343 2675 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:46:54.789413 kubelet[2675]: I0715 04:46:54.789386 2675 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:46:54.789617 kubelet[2675]: I0715 04:46:54.789592 2675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:46:54.790109 kubelet[2675]: E0715 04:46:54.790074 2675 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 04:46:54.855703 kubelet[2675]: I0715 04:46:54.855624 2675 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:54.855810 kubelet[2675]: I0715 04:46:54.855714 2675 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:54.855810 kubelet[2675]: I0715 04:46:54.855624 2675 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:54.893045 kubelet[2675]: I0715 04:46:54.893019 2675 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 04:46:54.900670 kubelet[2675]: I0715 04:46:54.900641 2675 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 04:46:54.901179 kubelet[2675]: I0715 04:46:54.900724 2675 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 04:46:54.930498 kubelet[2675]: I0715 04:46:54.930464 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:54.930665 kubelet[2675]: I0715 04:46:54.930503 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4576b19423dbb49d9882a648ff04b28a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4576b19423dbb49d9882a648ff04b28a\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:54.930665 kubelet[2675]: I0715 04:46:54.930535 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:54.930665 kubelet[2675]: I0715 04:46:54.930551 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:54.930665 kubelet[2675]: I0715 04:46:54.930568 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:54.930665 kubelet[2675]: I0715 04:46:54.930599 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:54.930843 kubelet[2675]: I0715 04:46:54.930623 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4576b19423dbb49d9882a648ff04b28a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4576b19423dbb49d9882a648ff04b28a\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:54.930843 kubelet[2675]: I0715 04:46:54.930639 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4576b19423dbb49d9882a648ff04b28a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4576b19423dbb49d9882a648ff04b28a\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:54.930843 kubelet[2675]: I0715 04:46:54.930660 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:46:55.164933 kubelet[2675]: E0715 04:46:55.164893 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:55.166137 kubelet[2675]: E0715 04:46:55.166047 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:55.166548 kubelet[2675]: E0715 04:46:55.166494 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:55.721089 kubelet[2675]: I0715 04:46:55.721047 2675 apiserver.go:52] "Watching apiserver" Jul 15 04:46:55.729110 kubelet[2675]: I0715 04:46:55.729083 2675 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:46:55.775480 kubelet[2675]: E0715 04:46:55.775432 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:55.775480 kubelet[2675]: I0715 04:46:55.775459 2675 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:55.776620 kubelet[2675]: I0715 04:46:55.776511 2675 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:55.780450 kubelet[2675]: E0715 04:46:55.779939 2675 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 04:46:55.780450 kubelet[2675]: E0715 04:46:55.780447 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:55.783335 kubelet[2675]: E0715 04:46:55.783288 2675 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 04:46:55.783499 kubelet[2675]: E0715 04:46:55.783448 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:55.817793 kubelet[2675]: I0715 04:46:55.817711 2675 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8176656580000001 podStartE2EDuration="1.817665658s" podCreationTimestamp="2025-07-15 04:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:46:55.816647675 +0000 UTC m=+1.152201593" watchObservedRunningTime="2025-07-15 04:46:55.817665658 +0000 UTC m=+1.153219576" Jul 15 04:46:55.843150 kubelet[2675]: I0715 04:46:55.842792 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8427752910000001 podStartE2EDuration="1.842775291s" podCreationTimestamp="2025-07-15 04:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:46:55.828277308 +0000 UTC m=+1.163831226" watchObservedRunningTime="2025-07-15 04:46:55.842775291 +0000 UTC m=+1.178329209" Jul 15 04:46:56.776698 kubelet[2675]: E0715 04:46:56.776591 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:56.776698 kubelet[2675]: E0715 04:46:56.776620 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:56.776698 kubelet[2675]: E0715 04:46:56.776715 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:46:58.661302 kubelet[2675]: E0715 04:46:58.661260 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:00.837401 kubelet[2675]: I0715 04:47:00.837279 2675 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:47:00.837752 containerd[1527]: time="2025-07-15T04:47:00.837615966Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 04:47:00.837931 kubelet[2675]: I0715 04:47:00.837786 2675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:47:01.425165 kubelet[2675]: I0715 04:47:01.425109 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.425090337 podStartE2EDuration="7.425090337s" podCreationTimestamp="2025-07-15 04:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:46:55.843843926 +0000 UTC m=+1.179397844" watchObservedRunningTime="2025-07-15 04:47:01.425090337 +0000 UTC m=+6.760644255" Jul 15 04:47:01.436032 systemd[1]: Created slice kubepods-besteffort-pod590f355e_046d_4ee4_9a14_772506b5b06a.slice - libcontainer container kubepods-besteffort-pod590f355e_046d_4ee4_9a14_772506b5b06a.slice. 
Jul 15 04:47:01.475726 kubelet[2675]: I0715 04:47:01.475686 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/590f355e-046d-4ee4-9a14-772506b5b06a-kube-proxy\") pod \"kube-proxy-ntflx\" (UID: \"590f355e-046d-4ee4-9a14-772506b5b06a\") " pod="kube-system/kube-proxy-ntflx" Jul 15 04:47:01.475726 kubelet[2675]: I0715 04:47:01.475723 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/590f355e-046d-4ee4-9a14-772506b5b06a-xtables-lock\") pod \"kube-proxy-ntflx\" (UID: \"590f355e-046d-4ee4-9a14-772506b5b06a\") " pod="kube-system/kube-proxy-ntflx" Jul 15 04:47:01.475875 kubelet[2675]: I0715 04:47:01.475743 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wfz8\" (UniqueName: \"kubernetes.io/projected/590f355e-046d-4ee4-9a14-772506b5b06a-kube-api-access-4wfz8\") pod \"kube-proxy-ntflx\" (UID: \"590f355e-046d-4ee4-9a14-772506b5b06a\") " pod="kube-system/kube-proxy-ntflx" Jul 15 04:47:01.475875 kubelet[2675]: I0715 04:47:01.475765 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/590f355e-046d-4ee4-9a14-772506b5b06a-lib-modules\") pod \"kube-proxy-ntflx\" (UID: \"590f355e-046d-4ee4-9a14-772506b5b06a\") " pod="kube-system/kube-proxy-ntflx" Jul 15 04:47:01.748574 kubelet[2675]: E0715 04:47:01.748445 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:01.750412 containerd[1527]: time="2025-07-15T04:47:01.749042528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ntflx,Uid:590f355e-046d-4ee4-9a14-772506b5b06a,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:01.782966 containerd[1527]: time="2025-07-15T04:47:01.780618739Z" level=info msg="connecting to shim ce6a2c558b52c5475d1a4a5abdbe0501dd65f8bf452db429c777372ce56ae254" address="unix:///run/containerd/s/5ee91ed91f1b1a66253a29851d9d969fb2860f5881f258213995f5b38246710c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:01.812585 systemd[1]: Started cri-containerd-ce6a2c558b52c5475d1a4a5abdbe0501dd65f8bf452db429c777372ce56ae254.scope - libcontainer container ce6a2c558b52c5475d1a4a5abdbe0501dd65f8bf452db429c777372ce56ae254. 
Jul 15 04:47:01.839738 containerd[1527]: time="2025-07-15T04:47:01.839685738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ntflx,Uid:590f355e-046d-4ee4-9a14-772506b5b06a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce6a2c558b52c5475d1a4a5abdbe0501dd65f8bf452db429c777372ce56ae254\"" Jul 15 04:47:01.840594 kubelet[2675]: E0715 04:47:01.840561 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:01.844556 containerd[1527]: time="2025-07-15T04:47:01.844501612Z" level=info msg="CreateContainer within sandbox \"ce6a2c558b52c5475d1a4a5abdbe0501dd65f8bf452db429c777372ce56ae254\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:47:01.854580 containerd[1527]: time="2025-07-15T04:47:01.854543795Z" level=info msg="Container 1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:01.861118 containerd[1527]: time="2025-07-15T04:47:01.861008008Z" level=info msg="CreateContainer within sandbox \"ce6a2c558b52c5475d1a4a5abdbe0501dd65f8bf452db429c777372ce56ae254\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d\"" Jul 15 04:47:01.861591 containerd[1527]: time="2025-07-15T04:47:01.861568390Z" level=info msg="StartContainer for \"1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d\"" Jul 15 04:47:01.862970 containerd[1527]: time="2025-07-15T04:47:01.862939959Z" level=info msg="connecting to shim 1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d" address="unix:///run/containerd/s/5ee91ed91f1b1a66253a29851d9d969fb2860f5881f258213995f5b38246710c" protocol=ttrpc version=3 Jul 15 04:47:01.887595 systemd[1]: Started cri-containerd-1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d.scope - libcontainer container 1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d. Jul 15 04:47:01.934844 containerd[1527]: time="2025-07-15T04:47:01.934745710Z" level=info msg="StartContainer for \"1bc6693d9dbf14d6660f0e0650a3ccb521ab69bff1f46781bb0e1c1d132c2b7d\" returns successfully" Jul 15 04:47:02.151309 systemd[1]: Created slice kubepods-besteffort-poddd8b6189_fc1c_402f_bcdf_a54a2df2ebbe.slice - libcontainer container kubepods-besteffort-poddd8b6189_fc1c_402f_bcdf_a54a2df2ebbe.slice. 
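[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the host's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line it keeps is exactly three entries (1.1.1.1 1.0.0.1 8.8.8.8), the classic three-nameserver resolver limit. A tiny sketch of that truncation, assuming a hypothetical host resolv.conf with a fourth nameserver:

```python
# Hypothetical host resolv.conf contents; only the first MAX_NAMESERVERS
# entries survive, which is why the log says "some nameservers have been omitted".
MAX_NAMESERVERS = 3  # classic resolv.conf limit the kubelet warns about

host_nameservers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]  # fourth entry invented
applied = host_nameservers[:MAX_NAMESERVERS]

print("applied nameserver line is:", " ".join(applied))
# applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8   (as in the log)
```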
Jul 15 04:47:02.179367 kubelet[2675]: I0715 04:47:02.179321 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd8b6189-fc1c-402f-bcdf-a54a2df2ebbe-var-lib-calico\") pod \"tigera-operator-747864d56d-rbv2p\" (UID: \"dd8b6189-fc1c-402f-bcdf-a54a2df2ebbe\") " pod="tigera-operator/tigera-operator-747864d56d-rbv2p" Jul 15 04:47:02.179494 kubelet[2675]: I0715 04:47:02.179378 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj78l\" (UniqueName: \"kubernetes.io/projected/dd8b6189-fc1c-402f-bcdf-a54a2df2ebbe-kube-api-access-sj78l\") pod \"tigera-operator-747864d56d-rbv2p\" (UID: \"dd8b6189-fc1c-402f-bcdf-a54a2df2ebbe\") " pod="tigera-operator/tigera-operator-747864d56d-rbv2p" Jul 15 04:47:02.455036 containerd[1527]: time="2025-07-15T04:47:02.454912579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rbv2p,Uid:dd8b6189-fc1c-402f-bcdf-a54a2df2ebbe,Namespace:tigera-operator,Attempt:0,}" Jul 15 04:47:02.471720 containerd[1527]: time="2025-07-15T04:47:02.471682807Z" level=info msg="connecting to shim 42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa" address="unix:///run/containerd/s/90d8d532ea49207c325d7877595797e13403af9ba7856e712a218379012ae0fa" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:02.496513 systemd[1]: Started cri-containerd-42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa.scope - libcontainer container 42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa. Jul 15 04:47:02.528978 containerd[1527]: time="2025-07-15T04:47:02.528910828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rbv2p,Uid:dd8b6189-fc1c-402f-bcdf-a54a2df2ebbe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa\"" Jul 15 04:47:02.530521 containerd[1527]: time="2025-07-15T04:47:02.530435297Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 04:47:02.791664 kubelet[2675]: E0715 04:47:02.791304 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:03.861477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541181192.mount: Deactivated successfully. 
Jul 15 04:47:04.315174 containerd[1527]: time="2025-07-15T04:47:04.314459909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:04.317112 containerd[1527]: time="2025-07-15T04:47:04.317026693Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 04:47:04.317794 containerd[1527]: time="2025-07-15T04:47:04.317766375Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:04.320737 containerd[1527]: time="2025-07-15T04:47:04.320701779Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:04.321530 containerd[1527]: time="2025-07-15T04:47:04.321489589Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.790989521s" Jul 15 04:47:04.321530 containerd[1527]: time="2025-07-15T04:47:04.321525355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 04:47:04.329084 containerd[1527]: time="2025-07-15T04:47:04.329040675Z" level=info msg="CreateContainer within sandbox \"42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 04:47:04.335710 containerd[1527]: time="2025-07-15T04:47:04.335661607Z" level=info msg="Container f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:04.349012 containerd[1527]: time="2025-07-15T04:47:04.348942158Z" level=info msg="CreateContainer within sandbox \"42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\"" Jul 15 04:47:04.349665 containerd[1527]: time="2025-07-15T04:47:04.349639874Z" level=info msg="StartContainer for \"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\"" Jul 15 04:47:04.351631 containerd[1527]: time="2025-07-15T04:47:04.351603678Z" level=info msg="connecting to shim f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581" address="unix:///run/containerd/s/90d8d532ea49207c325d7877595797e13403af9ba7856e712a218379012ae0fa" protocol=ttrpc version=3 Jul 15 04:47:04.403562 systemd[1]: Started cri-containerd-f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581.scope - libcontainer container f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581. 
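[Editor's note] The tigera-operator image pull above reports a repo size of 22,146,605 bytes fetched in 1.790989521s, which works out to roughly 12.4 MB/s (about 11.8 MiB/s). The arithmetic, using the figures exactly as logged:

```python
# Figures copied from the 'Pulled image "quay.io/tigera/operator:v1.38.3"' entry.
size_bytes = 22_146_605
duration_s = 1.790989521

print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")     # ~12.4 MB/s
print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~11.8 MiB/s
```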
Jul 15 04:47:04.436396 containerd[1527]: time="2025-07-15T04:47:04.436285530Z" level=info msg="StartContainer for \"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\" returns successfully" Jul 15 04:47:04.810326 kubelet[2675]: I0715 04:47:04.810272 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ntflx" podStartSLOduration=3.807924729 podStartE2EDuration="3.807924729s" podCreationTimestamp="2025-07-15 04:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:02.800164158 +0000 UTC m=+8.135718076" watchObservedRunningTime="2025-07-15 04:47:04.807924729 +0000 UTC m=+10.143478647" Jul 15 04:47:04.811134 kubelet[2675]: I0715 04:47:04.810829 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-rbv2p" podStartSLOduration=1.014733681 podStartE2EDuration="2.810813366s" podCreationTimestamp="2025-07-15 04:47:02 +0000 UTC" firstStartedPulling="2025-07-15 04:47:02.530118641 +0000 UTC m=+7.865672559" lastFinishedPulling="2025-07-15 04:47:04.326198326 +0000 UTC m=+9.661752244" observedRunningTime="2025-07-15 04:47:04.807519742 +0000 UTC m=+10.143073660" watchObservedRunningTime="2025-07-15 04:47:04.810813366 +0000 UTC m=+10.146367284" Jul 15 04:47:05.389673 kubelet[2675]: E0715 04:47:05.389511 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:05.818691 kubelet[2675]: E0715 04:47:05.818205 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:05.970280 kubelet[2675]: E0715 04:47:05.970217 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:06.495538 systemd[1]: cri-containerd-f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581.scope: Deactivated successfully. Jul 15 04:47:06.530905 containerd[1527]: time="2025-07-15T04:47:06.530792381Z" level=info msg="received exit event container_id:\"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\" id:\"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\" pid:3009 exit_status:1 exited_at:{seconds:1752554826 nanos:526289644}" Jul 15 04:47:06.531909 containerd[1527]: time="2025-07-15T04:47:06.530859071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\" id:\"f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581\" pid:3009 exit_status:1 exited_at:{seconds:1752554826 nanos:526289644}" Jul 15 04:47:06.630245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581-rootfs.mount: Deactivated successfully. 
Jul 15 04:47:06.838308 kubelet[2675]: I0715 04:47:06.837854 2675 scope.go:117] "RemoveContainer" containerID="f17444fe5f133e4da9372f5e56ef75a9626727eadcbdd2a9a7c1b3ec207bc581" Jul 15 04:47:06.845592 containerd[1527]: time="2025-07-15T04:47:06.845545439Z" level=info msg="CreateContainer within sandbox \"42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 15 04:47:06.856232 containerd[1527]: time="2025-07-15T04:47:06.855833832Z" level=info msg="Container 78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:06.863024 containerd[1527]: time="2025-07-15T04:47:06.862939173Z" level=info msg="CreateContainer within sandbox \"42e472fb06e9761ab93c1f88945c342e0ee76764ca4d36798c303dede59b28fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f\"" Jul 15 04:47:06.863593 containerd[1527]: time="2025-07-15T04:47:06.863565670Z" level=info msg="StartContainer for \"78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f\"" Jul 15 04:47:06.864693 containerd[1527]: time="2025-07-15T04:47:06.864658679Z" level=info msg="connecting to shim 78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f" address="unix:///run/containerd/s/90d8d532ea49207c325d7877595797e13403af9ba7856e712a218379012ae0fa" protocol=ttrpc version=3 Jul 15 04:47:06.897792 systemd[1]: Started cri-containerd-78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f.scope - libcontainer container 78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f. Jul 15 04:47:06.945868 containerd[1527]: time="2025-07-15T04:47:06.945760197Z" level=info msg="StartContainer for \"78cc153ccc8c3c9a0eccda5d064f2900e5c83b7407a6e24d589140f2c025e02f\" returns successfully" Jul 15 04:47:08.676805 kubelet[2675]: E0715 04:47:08.676750 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:09.717745 sudo[1743]: pam_unix(sudo:session): session closed for user root Jul 15 04:47:09.720308 sshd[1742]: Connection closed by 10.0.0.1 port 34446 Jul 15 04:47:09.721200 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:09.724728 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:47:09.725121 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:34446.service: Deactivated successfully. Jul 15 04:47:09.727285 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:47:09.727643 systemd[1]: session-7.scope: Consumed 7.921s CPU time, 214.8M memory peak. Jul 15 04:47:09.730023 systemd-logind[1508]: Removed session 7. Jul 15 04:47:10.507180 update_engine[1510]: I20250715 04:47:10.507104 1510 update_attempter.cc:509] Updating boot flags... Jul 15 04:47:13.436661 systemd[1]: Created slice kubepods-besteffort-podff4d54ac_e3ef_4e16_9e2e_78a3464acb60.slice - libcontainer container kubepods-besteffort-podff4d54ac_e3ef_4e16_9e2e_78a3464acb60.slice. 
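[Editor's note] The systemd "Created slice kubepods-..." entries throughout this log show how pod cgroup slices are named: the QoS class (burstable or besteffort here) plus the pod UID with dashes mapped to underscores, e.g. UID ff4d54ac-e3ef-4e16-9e2e-78a3464acb60 becomes kubepods-besteffort-podff4d54ac_e3ef_4e16_9e2e_78a3464acb60.slice. A small helper reproducing the mapping from the examples in this log (not a general implementation of the kubelet's systemd cgroup driver):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Reproduce the slice names seen in this log (dashes become underscores)."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# UIDs copied from the log above.
print(pod_slice_name("besteffort", "ff4d54ac-e3ef-4e16-9e2e-78a3464acb60"))
# kubepods-besteffort-podff4d54ac_e3ef_4e16_9e2e_78a3464acb60.slice
print(pod_slice_name("burstable", "4576b19423dbb49d9882a648ff04b28a"))
# kubepods-burstable-pod4576b19423dbb49d9882a648ff04b28a.slice
```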
Jul 15 04:47:13.447831 kubelet[2675]: I0715 04:47:13.447708 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff4d54ac-e3ef-4e16-9e2e-78a3464acb60-tigera-ca-bundle\") pod \"calico-typha-566fcb56-pcj7m\" (UID: \"ff4d54ac-e3ef-4e16-9e2e-78a3464acb60\") " pod="calico-system/calico-typha-566fcb56-pcj7m" Jul 15 04:47:13.448989 kubelet[2675]: I0715 04:47:13.448420 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ff4d54ac-e3ef-4e16-9e2e-78a3464acb60-typha-certs\") pod \"calico-typha-566fcb56-pcj7m\" (UID: \"ff4d54ac-e3ef-4e16-9e2e-78a3464acb60\") " pod="calico-system/calico-typha-566fcb56-pcj7m" Jul 15 04:47:13.448989 kubelet[2675]: I0715 04:47:13.448672 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmmf\" (UniqueName: \"kubernetes.io/projected/ff4d54ac-e3ef-4e16-9e2e-78a3464acb60-kube-api-access-tnmmf\") pod \"calico-typha-566fcb56-pcj7m\" (UID: \"ff4d54ac-e3ef-4e16-9e2e-78a3464acb60\") " pod="calico-system/calico-typha-566fcb56-pcj7m" Jul 15 04:47:13.584025 systemd[1]: Created slice kubepods-besteffort-pod6bde24c0_80c0_4d45_9192_878fb2ece86f.slice - libcontainer container kubepods-besteffort-pod6bde24c0_80c0_4d45_9192_878fb2ece86f.slice. Jul 15 04:47:13.649826 kubelet[2675]: I0715 04:47:13.649767 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-flexvol-driver-host\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.649826 kubelet[2675]: I0715 04:47:13.649821 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-lib-modules\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.649826 kubelet[2675]: I0715 04:47:13.649846 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-var-run-calico\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.649826 kubelet[2675]: I0715 04:47:13.649867 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bde24c0-80c0-4d45-9192-878fb2ece86f-tigera-ca-bundle\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.649826 kubelet[2675]: I0715 04:47:13.649896 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-cni-bin-dir\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650554 kubelet[2675]: I0715 04:47:13.649913 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-cni-net-dir\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650554 kubelet[2675]: I0715 04:47:13.649930 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6bde24c0-80c0-4d45-9192-878fb2ece86f-node-certs\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650554 kubelet[2675]: I0715 04:47:13.649946 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-xtables-lock\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650554 kubelet[2675]: I0715 04:47:13.649964 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-var-lib-calico\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650554 kubelet[2675]: I0715 04:47:13.649982 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-policysync\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650708 kubelet[2675]: I0715 04:47:13.650000 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6bde24c0-80c0-4d45-9192-878fb2ece86f-cni-log-dir\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.650708 kubelet[2675]: I0715 04:47:13.650015 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sb7v\" (UniqueName: \"kubernetes.io/projected/6bde24c0-80c0-4d45-9192-878fb2ece86f-kube-api-access-5sb7v\") pod \"calico-node-gzkfz\" (UID: \"6bde24c0-80c0-4d45-9192-878fb2ece86f\") " pod="calico-system/calico-node-gzkfz" Jul 15 04:47:13.743179 kubelet[2675]: E0715 04:47:13.742984 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:13.743772 containerd[1527]: time="2025-07-15T04:47:13.743723851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566fcb56-pcj7m,Uid:ff4d54ac-e3ef-4e16-9e2e-78a3464acb60,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:13.765998 kubelet[2675]: E0715 04:47:13.765961 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:13.765998 kubelet[2675]: W0715 04:47:13.765987 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:13.767384 kubelet[2675]: E0715 04:47:13.766692 2675 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:13.772501 kubelet[2675]: E0715 04:47:13.772456 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:13.772501 kubelet[2675]: W0715 04:47:13.772483 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:13.772501 kubelet[2675]: E0715 04:47:13.772503 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:13.782378 containerd[1527]: time="2025-07-15T04:47:13.782317877Z" level=info msg="connecting to shim 82927877450b65a538a55462054a13e90dd4ce6459688140affe98d12e568c3a" address="unix:///run/containerd/s/0bb218aa1307ca9077b66d9120bf922885f59b95618f686d7e12c1f32522061e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:13.812598 systemd[1]: Started cri-containerd-82927877450b65a538a55462054a13e90dd4ce6459688140affe98d12e568c3a.scope - libcontainer container 82927877450b65a538a55462054a13e90dd4ce6459688140affe98d12e568c3a. Jul 15 04:47:13.889266 containerd[1527]: time="2025-07-15T04:47:13.888723790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gzkfz,Uid:6bde24c0-80c0-4d45-9192-878fb2ece86f,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:13.933432 containerd[1527]: time="2025-07-15T04:47:13.933273874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566fcb56-pcj7m,Uid:ff4d54ac-e3ef-4e16-9e2e-78a3464acb60,Namespace:calico-system,Attempt:0,} returns sandbox id \"82927877450b65a538a55462054a13e90dd4ce6459688140affe98d12e568c3a\"" Jul 15 04:47:13.934328 kubelet[2675]: E0715 04:47:13.934179 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:13.937895 containerd[1527]: time="2025-07-15T04:47:13.937807437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 04:47:13.960254 kubelet[2675]: E0715 04:47:13.960016 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nvc78" podUID="084d4de4-ee2b-48f0-ba8f-271876d17fba" Jul 15 04:47:13.961624 containerd[1527]: time="2025-07-15T04:47:13.961581224Z" level=info msg="connecting to shim 1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239" address="unix:///run/containerd/s/6204c916c8ec9110a4280ce61abb6d3db870c6b863ce784cc6c62b40af0cf710" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:13.990553 systemd[1]: Started cri-containerd-1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239.scope - libcontainer container 1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239. 
Jul 15 04:47:14.023304 containerd[1527]: time="2025-07-15T04:47:14.023193739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gzkfz,Uid:6bde24c0-80c0-4d45-9192-878fb2ece86f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\"" Jul 15 04:47:14.044910 kubelet[2675]: E0715 04:47:14.044872 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.044910 kubelet[2675]: W0715 04:47:14.044904 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.045055 kubelet[2675]: E0715 04:47:14.044928 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.045611 kubelet[2675]: E0715 04:47:14.045586 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.048128 kubelet[2675]: W0715 04:47:14.045612 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.048128 kubelet[2675]: E0715 04:47:14.048004 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.048322 kubelet[2675]: E0715 04:47:14.048307 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.048392 kubelet[2675]: W0715 04:47:14.048380 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.048452 kubelet[2675]: E0715 04:47:14.048441 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.048818 kubelet[2675]: E0715 04:47:14.048707 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.048818 kubelet[2675]: W0715 04:47:14.048720 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.048818 kubelet[2675]: E0715 04:47:14.048729 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:14.053970 kubelet[2675]: E0715 04:47:14.053957 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.054036 kubelet[2675]: W0715 04:47:14.054025 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.054096 kubelet[2675]: E0715 04:47:14.054084 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.054591 kubelet[2675]: E0715 04:47:14.054421 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.054591 kubelet[2675]: W0715 04:47:14.054435 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.054591 kubelet[2675]: E0715 04:47:14.054445 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.054591 kubelet[2675]: I0715 04:47:14.054480 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnc6\" (UniqueName: \"kubernetes.io/projected/084d4de4-ee2b-48f0-ba8f-271876d17fba-kube-api-access-dqnc6\") pod \"csi-node-driver-nvc78\" (UID: \"084d4de4-ee2b-48f0-ba8f-271876d17fba\") " pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:14.054794 kubelet[2675]: E0715 04:47:14.054777 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.054853 kubelet[2675]: W0715 04:47:14.054840 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.054903 kubelet[2675]: E0715 04:47:14.054892 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.054971 kubelet[2675]: I0715 04:47:14.054957 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/084d4de4-ee2b-48f0-ba8f-271876d17fba-varrun\") pod \"csi-node-driver-nvc78\" (UID: \"084d4de4-ee2b-48f0-ba8f-271876d17fba\") " pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:14.055283 kubelet[2675]: E0715 04:47:14.055264 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.055283 kubelet[2675]: W0715 04:47:14.055281 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.055351 kubelet[2675]: E0715 04:47:14.055294 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:14.055535 kubelet[2675]: E0715 04:47:14.055518 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.055535 kubelet[2675]: W0715 04:47:14.055532 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.055598 kubelet[2675]: E0715 04:47:14.055542 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.055887 kubelet[2675]: E0715 04:47:14.055869 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.056044 kubelet[2675]: W0715 04:47:14.056024 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.056100 kubelet[2675]: E0715 04:47:14.056048 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.056227 kubelet[2675]: I0715 04:47:14.056117 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/084d4de4-ee2b-48f0-ba8f-271876d17fba-registration-dir\") pod \"csi-node-driver-nvc78\" (UID: \"084d4de4-ee2b-48f0-ba8f-271876d17fba\") " pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:14.056428 kubelet[2675]: E0715 04:47:14.056406 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.056494 kubelet[2675]: W0715 04:47:14.056481 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.056564 kubelet[2675]: E0715 04:47:14.056551 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.056910 kubelet[2675]: E0715 04:47:14.056803 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.056910 kubelet[2675]: W0715 04:47:14.056815 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.056910 kubelet[2675]: E0715 04:47:14.056825 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:14.057079 kubelet[2675]: E0715 04:47:14.057066 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.057129 kubelet[2675]: W0715 04:47:14.057118 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.057185 kubelet[2675]: E0715 04:47:14.057175 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.057255 kubelet[2675]: I0715 04:47:14.057242 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/084d4de4-ee2b-48f0-ba8f-271876d17fba-socket-dir\") pod \"csi-node-driver-nvc78\" (UID: \"084d4de4-ee2b-48f0-ba8f-271876d17fba\") " pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:14.057604 kubelet[2675]: E0715 04:47:14.057563 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.057604 kubelet[2675]: W0715 04:47:14.057601 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.057676 kubelet[2675]: E0715 04:47:14.057615 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.057906 kubelet[2675]: E0715 04:47:14.057854 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.057906 kubelet[2675]: W0715 04:47:14.057892 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.057906 kubelet[2675]: E0715 04:47:14.057904 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.058133 kubelet[2675]: E0715 04:47:14.058101 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.058156 kubelet[2675]: W0715 04:47:14.058134 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.058178 kubelet[2675]: E0715 04:47:14.058162 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:14.058349 kubelet[2675]: I0715 04:47:14.058331 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/084d4de4-ee2b-48f0-ba8f-271876d17fba-kubelet-dir\") pod \"csi-node-driver-nvc78\" (UID: \"084d4de4-ee2b-48f0-ba8f-271876d17fba\") " pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:14.059021 kubelet[2675]: E0715 04:47:14.058988 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.059021 kubelet[2675]: W0715 04:47:14.059010 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.059078 kubelet[2675]: E0715 04:47:14.059025 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.059637 kubelet[2675]: E0715 04:47:14.059619 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.059673 kubelet[2675]: W0715 04:47:14.059636 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.059673 kubelet[2675]: E0715 04:47:14.059650 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.059874 kubelet[2675]: E0715 04:47:14.059858 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.059874 kubelet[2675]: W0715 04:47:14.059872 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.059921 kubelet[2675]: E0715 04:47:14.059881 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.060098 kubelet[2675]: E0715 04:47:14.060085 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.060098 kubelet[2675]: W0715 04:47:14.060096 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.060145 kubelet[2675]: E0715 04:47:14.060107 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:14.174941 kubelet[2675]: E0715 04:47:14.174904 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:14.174941 kubelet[2675]: W0715 04:47:14.174923 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:14.174941 kubelet[2675]: E0715 04:47:14.174939 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:14.883309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345505604.mount: Deactivated successfully. Jul 15 04:47:15.726477 containerd[1527]: time="2025-07-15T04:47:15.726433489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:15.727792 containerd[1527]: time="2025-07-15T04:47:15.727469369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 15 04:47:15.728345 containerd[1527]: time="2025-07-15T04:47:15.728307787Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:15.731013 containerd[1527]: time="2025-07-15T04:47:15.730974457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:15.731609 containerd[1527]: time="2025-07-15T04:47:15.731583848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.793503858s" Jul 15 04:47:15.731704 containerd[1527]: time="2025-07-15T04:47:15.731679019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 04:47:15.734395 containerd[1527]: time="2025-07-15T04:47:15.732762705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 04:47:15.745726 containerd[1527]: time="2025-07-15T04:47:15.745667927Z" level=info msg="CreateContainer within sandbox \"82927877450b65a538a55462054a13e90dd4ce6459688140affe98d12e568c3a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 04:47:15.753330 containerd[1527]: time="2025-07-15T04:47:15.753275772Z" level=info msg="Container 008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:15.755172 kubelet[2675]: E0715 04:47:15.755123 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nvc78" podUID="084d4de4-ee2b-48f0-ba8f-271876d17fba" Jul 15 04:47:15.761279 containerd[1527]: 
time="2025-07-15T04:47:15.761172211Z" level=info msg="CreateContainer within sandbox \"82927877450b65a538a55462054a13e90dd4ce6459688140affe98d12e568c3a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630\"" Jul 15 04:47:15.763861 containerd[1527]: time="2025-07-15T04:47:15.762535650Z" level=info msg="StartContainer for \"008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630\"" Jul 15 04:47:15.763861 containerd[1527]: time="2025-07-15T04:47:15.763810518Z" level=info msg="connecting to shim 008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630" address="unix:///run/containerd/s/0bb218aa1307ca9077b66d9120bf922885f59b95618f686d7e12c1f32522061e" protocol=ttrpc version=3 Jul 15 04:47:15.788572 systemd[1]: Started cri-containerd-008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630.scope - libcontainer container 008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630. Jul 15 04:47:15.839217 containerd[1527]: time="2025-07-15T04:47:15.839163727Z" level=info msg="StartContainer for \"008d948d5ea5aa26d88f37ed584de66547be171aabf3b225a62789ba654ff630\" returns successfully" Jul 15 04:47:15.848467 kubelet[2675]: E0715 04:47:15.848424 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:15.862533 kubelet[2675]: I0715 04:47:15.862476 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-566fcb56-pcj7m" podStartSLOduration=1.066482841 podStartE2EDuration="2.862460198s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="2025-07-15 04:47:13.936657934 +0000 UTC m=+19.272211852" lastFinishedPulling="2025-07-15 04:47:15.732635291 +0000 UTC m=+21.068189209" observedRunningTime="2025-07-15 04:47:15.862384869 +0000 UTC m=+21.197938787" watchObservedRunningTime="2025-07-15 04:47:15.862460198 +0000 UTC m=+21.198014116" Jul 15 04:47:15.867669 kubelet[2675]: E0715 04:47:15.867640 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.867669 kubelet[2675]: W0715 04:47:15.867664 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.867828 kubelet[2675]: E0715 04:47:15.867686 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.867908 kubelet[2675]: E0715 04:47:15.867896 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.867979 kubelet[2675]: W0715 04:47:15.867908 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.868011 kubelet[2675]: E0715 04:47:15.867981 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 15 04:47:15.875548 kubelet[2675]: E0715 04:47:15.875441 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.875754 kubelet[2675]: W0715 04:47:15.875736 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.876541 kubelet[2675]: E0715 04:47:15.876522 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.877242 kubelet[2675]: E0715 04:47:15.876869 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.877242 kubelet[2675]: W0715 04:47:15.877059 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.877242 kubelet[2675]: E0715 04:47:15.877077 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.877996 kubelet[2675]: E0715 04:47:15.877976 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.878417 kubelet[2675]: W0715 04:47:15.878167 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.878417 kubelet[2675]: E0715 04:47:15.878191 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.879184 kubelet[2675]: E0715 04:47:15.879160 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.879184 kubelet[2675]: W0715 04:47:15.879181 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.879239 kubelet[2675]: E0715 04:47:15.879199 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.879415 kubelet[2675]: E0715 04:47:15.879386 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.879415 kubelet[2675]: W0715 04:47:15.879396 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.879415 kubelet[2675]: E0715 04:47:15.879404 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:15.879844 kubelet[2675]: E0715 04:47:15.879585 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.879844 kubelet[2675]: W0715 04:47:15.879595 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.879844 kubelet[2675]: E0715 04:47:15.879605 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.879844 kubelet[2675]: E0715 04:47:15.879796 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.879844 kubelet[2675]: W0715 04:47:15.879807 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.879844 kubelet[2675]: E0715 04:47:15.879815 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.880022 kubelet[2675]: E0715 04:47:15.879975 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.880022 kubelet[2675]: W0715 04:47:15.879982 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.880022 kubelet[2675]: E0715 04:47:15.879989 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.880690 kubelet[2675]: E0715 04:47:15.880156 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.880690 kubelet[2675]: W0715 04:47:15.880167 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.880690 kubelet[2675]: E0715 04:47:15.880176 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.880690 kubelet[2675]: E0715 04:47:15.880520 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.880690 kubelet[2675]: W0715 04:47:15.880661 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.880690 kubelet[2675]: E0715 04:47:15.880679 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:15.881172 kubelet[2675]: E0715 04:47:15.881044 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.881172 kubelet[2675]: W0715 04:47:15.881058 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.881172 kubelet[2675]: E0715 04:47:15.881069 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.881656 kubelet[2675]: E0715 04:47:15.881636 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.881656 kubelet[2675]: W0715 04:47:15.881653 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.881747 kubelet[2675]: E0715 04:47:15.881666 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.882322 kubelet[2675]: E0715 04:47:15.882293 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.882322 kubelet[2675]: W0715 04:47:15.882312 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.882436 kubelet[2675]: E0715 04:47:15.882328 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.883087 kubelet[2675]: E0715 04:47:15.882890 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.883087 kubelet[2675]: W0715 04:47:15.882912 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.883087 kubelet[2675]: E0715 04:47:15.882926 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:15.883211 kubelet[2675]: E0715 04:47:15.883121 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.883211 kubelet[2675]: W0715 04:47:15.883130 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.883211 kubelet[2675]: E0715 04:47:15.883140 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:15.883820 kubelet[2675]: E0715 04:47:15.883786 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:15.883820 kubelet[2675]: W0715 04:47:15.883819 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:15.883922 kubelet[2675]: E0715 04:47:15.883835 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.850579 kubelet[2675]: I0715 04:47:16.850505 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:47:16.851641 kubelet[2675]: E0715 04:47:16.850899 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:16.877041 kubelet[2675]: E0715 04:47:16.877012 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.877041 kubelet[2675]: W0715 04:47:16.877035 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.877407 kubelet[2675]: E0715 04:47:16.877055 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.877480 kubelet[2675]: E0715 04:47:16.877442 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.877480 kubelet[2675]: W0715 04:47:16.877457 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.877480 kubelet[2675]: E0715 04:47:16.877470 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.877618 kubelet[2675]: E0715 04:47:16.877610 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.877644 kubelet[2675]: W0715 04:47:16.877618 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.877644 kubelet[2675]: E0715 04:47:16.877626 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.877766 kubelet[2675]: E0715 04:47:16.877753 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.877766 kubelet[2675]: W0715 04:47:16.877763 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.877857 kubelet[2675]: E0715 04:47:16.877773 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.878002 kubelet[2675]: E0715 04:47:16.877948 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.878002 kubelet[2675]: W0715 04:47:16.877959 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.878002 kubelet[2675]: E0715 04:47:16.877968 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.878181 kubelet[2675]: E0715 04:47:16.878152 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.878181 kubelet[2675]: W0715 04:47:16.878163 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.878181 kubelet[2675]: E0715 04:47:16.878172 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.878414 kubelet[2675]: E0715 04:47:16.878303 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.878414 kubelet[2675]: W0715 04:47:16.878313 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.878414 kubelet[2675]: E0715 04:47:16.878322 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.878557 kubelet[2675]: E0715 04:47:16.878463 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.878557 kubelet[2675]: W0715 04:47:16.878470 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.878557 kubelet[2675]: E0715 04:47:16.878478 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.878656 kubelet[2675]: E0715 04:47:16.878618 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.878656 kubelet[2675]: W0715 04:47:16.878627 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.878656 kubelet[2675]: E0715 04:47:16.878635 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.878879 kubelet[2675]: E0715 04:47:16.878837 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.878879 kubelet[2675]: W0715 04:47:16.878850 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.878879 kubelet[2675]: E0715 04:47:16.878858 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.879497 kubelet[2675]: E0715 04:47:16.879398 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.879497 kubelet[2675]: W0715 04:47:16.879413 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.879497 kubelet[2675]: E0715 04:47:16.879423 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.879803 kubelet[2675]: E0715 04:47:16.879633 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.879803 kubelet[2675]: W0715 04:47:16.879650 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.879803 kubelet[2675]: E0715 04:47:16.879660 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.880888 kubelet[2675]: E0715 04:47:16.879899 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.880888 kubelet[2675]: W0715 04:47:16.879910 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.880888 kubelet[2675]: E0715 04:47:16.879920 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.880888 kubelet[2675]: E0715 04:47:16.880159 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.880888 kubelet[2675]: W0715 04:47:16.880169 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.880888 kubelet[2675]: E0715 04:47:16.880178 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.880888 kubelet[2675]: E0715 04:47:16.880472 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.880888 kubelet[2675]: W0715 04:47:16.880484 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.880888 kubelet[2675]: E0715 04:47:16.880494 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.885007 kubelet[2675]: E0715 04:47:16.884878 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.885007 kubelet[2675]: W0715 04:47:16.884898 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.885007 kubelet[2675]: E0715 04:47:16.884911 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.885385 kubelet[2675]: E0715 04:47:16.885277 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.885385 kubelet[2675]: W0715 04:47:16.885291 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.885385 kubelet[2675]: E0715 04:47:16.885301 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.885838 kubelet[2675]: E0715 04:47:16.885808 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.885838 kubelet[2675]: W0715 04:47:16.885836 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.885925 kubelet[2675]: E0715 04:47:16.885851 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.886094 kubelet[2675]: E0715 04:47:16.886080 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.886127 kubelet[2675]: W0715 04:47:16.886094 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.886127 kubelet[2675]: E0715 04:47:16.886106 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.886376 kubelet[2675]: E0715 04:47:16.886350 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.886427 kubelet[2675]: W0715 04:47:16.886379 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.886427 kubelet[2675]: E0715 04:47:16.886391 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.886613 kubelet[2675]: E0715 04:47:16.886602 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.886613 kubelet[2675]: W0715 04:47:16.886613 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.886693 kubelet[2675]: E0715 04:47:16.886622 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.886793 kubelet[2675]: E0715 04:47:16.886779 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.886793 kubelet[2675]: W0715 04:47:16.886790 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.886901 kubelet[2675]: E0715 04:47:16.886797 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.886999 kubelet[2675]: E0715 04:47:16.886944 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.886999 kubelet[2675]: W0715 04:47:16.886960 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.886999 kubelet[2675]: E0715 04:47:16.886971 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.887239 kubelet[2675]: E0715 04:47:16.887175 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.887239 kubelet[2675]: W0715 04:47:16.887205 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.887239 kubelet[2675]: E0715 04:47:16.887220 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.887446 kubelet[2675]: E0715 04:47:16.887368 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.887446 kubelet[2675]: W0715 04:47:16.887381 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.887446 kubelet[2675]: E0715 04:47:16.887390 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.887731 kubelet[2675]: E0715 04:47:16.887718 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.887731 kubelet[2675]: W0715 04:47:16.887731 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.887805 kubelet[2675]: E0715 04:47:16.887743 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.888580 kubelet[2675]: E0715 04:47:16.888562 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.888613 kubelet[2675]: W0715 04:47:16.888580 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.888613 kubelet[2675]: E0715 04:47:16.888594 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.888978 kubelet[2675]: E0715 04:47:16.888964 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.889017 kubelet[2675]: W0715 04:47:16.888979 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.889017 kubelet[2675]: E0715 04:47:16.888990 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.889317 kubelet[2675]: E0715 04:47:16.889302 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.889349 kubelet[2675]: W0715 04:47:16.889317 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.889349 kubelet[2675]: E0715 04:47:16.889328 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.889516 kubelet[2675]: E0715 04:47:16.889501 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.889562 kubelet[2675]: W0715 04:47:16.889517 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.889562 kubelet[2675]: E0715 04:47:16.889529 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.889891 kubelet[2675]: E0715 04:47:16.889875 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.889920 kubelet[2675]: W0715 04:47:16.889891 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.889920 kubelet[2675]: E0715 04:47:16.889904 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.890275 kubelet[2675]: E0715 04:47:16.890260 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.890308 kubelet[2675]: W0715 04:47:16.890274 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.890308 kubelet[2675]: E0715 04:47:16.890286 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:47:16.891043 kubelet[2675]: E0715 04:47:16.891025 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:47:16.891076 kubelet[2675]: W0715 04:47:16.891043 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:47:16.891076 kubelet[2675]: E0715 04:47:16.891058 2675 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:47:16.936742 containerd[1527]: time="2025-07-15T04:47:16.936144299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:16.936742 containerd[1527]: time="2025-07-15T04:47:16.936486657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 04:47:16.937263 containerd[1527]: time="2025-07-15T04:47:16.937231621Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:16.939171 containerd[1527]: time="2025-07-15T04:47:16.939141557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:16.939756 containerd[1527]: time="2025-07-15T04:47:16.939721822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.206925993s" Jul 15 04:47:16.939821 containerd[1527]: time="2025-07-15T04:47:16.939765627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 04:47:16.943643 containerd[1527]: time="2025-07-15T04:47:16.943578977Z" level=info msg="CreateContainer within sandbox \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 04:47:16.952660 containerd[1527]: time="2025-07-15T04:47:16.951642366Z" level=info msg="Container d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:16.955840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55055995.mount: Deactivated successfully. Jul 15 04:47:16.959498 containerd[1527]: time="2025-07-15T04:47:16.959459687Z" level=info msg="CreateContainer within sandbox \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\"" Jul 15 04:47:16.961400 containerd[1527]: time="2025-07-15T04:47:16.960662623Z" level=info msg="StartContainer for \"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\"" Jul 15 04:47:16.962933 containerd[1527]: time="2025-07-15T04:47:16.962886633Z" level=info msg="connecting to shim d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7" address="unix:///run/containerd/s/6204c916c8ec9110a4280ce61abb6d3db870c6b863ce784cc6c62b40af0cf710" protocol=ttrpc version=3 Jul 15 04:47:16.985553 systemd[1]: Started cri-containerd-d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7.scope - libcontainer container d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7. 
Jul 15 04:47:17.017978 containerd[1527]: time="2025-07-15T04:47:17.017928100Z" level=info msg="StartContainer for \"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\" returns successfully" Jul 15 04:47:17.055146 systemd[1]: cri-containerd-d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7.scope: Deactivated successfully. Jul 15 04:47:17.066210 containerd[1527]: time="2025-07-15T04:47:17.066097360Z" level=info msg="received exit event container_id:\"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\" id:\"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\" pid:3461 exited_at:{seconds:1752554837 nanos:58160653}" Jul 15 04:47:17.066770 containerd[1527]: time="2025-07-15T04:47:17.066192691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\" id:\"d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7\" pid:3461 exited_at:{seconds:1752554837 nanos:58160653}" Jul 15 04:47:17.085178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9f9cc26e38a41f3d4a178eed4fe5cdcc7410d221232b5910b7037d9053025f7-rootfs.mount: Deactivated successfully. Jul 15 04:47:17.755708 kubelet[2675]: E0715 04:47:17.755655 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nvc78" podUID="084d4de4-ee2b-48f0-ba8f-271876d17fba" Jul 15 04:47:17.856236 containerd[1527]: time="2025-07-15T04:47:17.855929696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 04:47:19.755293 kubelet[2675]: E0715 04:47:19.755176 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nvc78" podUID="084d4de4-ee2b-48f0-ba8f-271876d17fba" Jul 15 04:47:20.652532 containerd[1527]: time="2025-07-15T04:47:20.652446701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:20.653008 containerd[1527]: time="2025-07-15T04:47:20.652960792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 04:47:20.653766 containerd[1527]: time="2025-07-15T04:47:20.653729388Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:20.655559 containerd[1527]: time="2025-07-15T04:47:20.655522847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:20.656250 containerd[1527]: time="2025-07-15T04:47:20.656219996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.800250216s" Jul 15 04:47:20.656281 containerd[1527]: 
time="2025-07-15T04:47:20.656261200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 04:47:20.661913 containerd[1527]: time="2025-07-15T04:47:20.661620252Z" level=info msg="CreateContainer within sandbox \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 04:47:20.670700 containerd[1527]: time="2025-07-15T04:47:20.670652629Z" level=info msg="Container ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:20.692379 containerd[1527]: time="2025-07-15T04:47:20.692317620Z" level=info msg="CreateContainer within sandbox \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\"" Jul 15 04:47:20.692906 containerd[1527]: time="2025-07-15T04:47:20.692880916Z" level=info msg="StartContainer for \"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\"" Jul 15 04:47:20.694706 containerd[1527]: time="2025-07-15T04:47:20.694616848Z" level=info msg="connecting to shim ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e" address="unix:///run/containerd/s/6204c916c8ec9110a4280ce61abb6d3db870c6b863ce784cc6c62b40af0cf710" protocol=ttrpc version=3 Jul 15 04:47:20.726570 systemd[1]: Started cri-containerd-ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e.scope - libcontainer container ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e. Jul 15 04:47:20.763189 containerd[1527]: time="2025-07-15T04:47:20.763104288Z" level=info msg="StartContainer for \"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\" returns successfully" Jul 15 04:47:21.328388 containerd[1527]: time="2025-07-15T04:47:21.328157613Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:47:21.328770 systemd[1]: cri-containerd-ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e.scope: Deactivated successfully. Jul 15 04:47:21.329412 systemd[1]: cri-containerd-ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e.scope: Consumed 466ms CPU time, 179.5M memory peak, 3.7M read from disk, 165.8M written to disk. Jul 15 04:47:21.330469 containerd[1527]: time="2025-07-15T04:47:21.330406870Z" level=info msg="received exit event container_id:\"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\" id:\"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\" pid:3521 exited_at:{seconds:1752554841 nanos:330239613}" Jul 15 04:47:21.330600 containerd[1527]: time="2025-07-15T04:47:21.330579486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\" id:\"ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e\" pid:3521 exited_at:{seconds:1752554841 nanos:330239613}" Jul 15 04:47:21.347636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef362ad801b2bdd3392d8e217976aaccd161073ed5ec91d37557a3d43cbe7b9e-rootfs.mount: Deactivated successfully. 
Jul 15 04:47:21.390199 kubelet[2675]: I0715 04:47:21.390074 2675 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 04:47:21.447245 systemd[1]: Created slice kubepods-besteffort-pod3dde1c96_ab62_40f8_ba4c_252df247f179.slice - libcontainer container kubepods-besteffort-pod3dde1c96_ab62_40f8_ba4c_252df247f179.slice. Jul 15 04:47:21.459484 systemd[1]: Created slice kubepods-burstable-pod20642bd1_05fb_429a_a40a_fda9cd5820ba.slice - libcontainer container kubepods-burstable-pod20642bd1_05fb_429a_a40a_fda9cd5820ba.slice. Jul 15 04:47:21.476052 systemd[1]: Created slice kubepods-besteffort-pod348e8462_7e3a_4709_a2d9_1e4cf20af94a.slice - libcontainer container kubepods-besteffort-pod348e8462_7e3a_4709_a2d9_1e4cf20af94a.slice. Jul 15 04:47:21.484462 systemd[1]: Created slice kubepods-besteffort-pod89c9e28e_100f_4e96_85cc_ea4bcf2583b8.slice - libcontainer container kubepods-besteffort-pod89c9e28e_100f_4e96_85cc_ea4bcf2583b8.slice. Jul 15 04:47:21.491681 systemd[1]: Created slice kubepods-besteffort-pod13855d06_ab16_423c_92da_241b51e79162.slice - libcontainer container kubepods-besteffort-pod13855d06_ab16_423c_92da_241b51e79162.slice. Jul 15 04:47:21.500957 systemd[1]: Created slice kubepods-besteffort-podd34fce70_fd78_417f_b0ec_e1f83ffebe90.slice - libcontainer container kubepods-besteffort-podd34fce70_fd78_417f_b0ec_e1f83ffebe90.slice. Jul 15 04:47:21.506631 systemd[1]: Created slice kubepods-burstable-pod41c5b645_6734_48b4_a165_074bd90a2006.slice - libcontainer container kubepods-burstable-pod41c5b645_6734_48b4_a165_074bd90a2006.slice. Jul 15 04:47:21.623440 kubelet[2675]: I0715 04:47:21.623334 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqm5s\" (UniqueName: \"kubernetes.io/projected/13855d06-ab16-423c-92da-241b51e79162-kube-api-access-kqm5s\") pod \"whisker-54dc45866c-ptnmh\" (UID: \"13855d06-ab16-423c-92da-241b51e79162\") " pod="calico-system/whisker-54dc45866c-ptnmh" Jul 15 04:47:21.623568 kubelet[2675]: I0715 04:47:21.623472 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c697\" (UniqueName: \"kubernetes.io/projected/3dde1c96-ab62-40f8-ba4c-252df247f179-kube-api-access-7c697\") pod \"calico-kube-controllers-7bf87f4ddb-bbvjj\" (UID: \"3dde1c96-ab62-40f8-ba4c-252df247f179\") " pod="calico-system/calico-kube-controllers-7bf87f4ddb-bbvjj" Jul 15 04:47:21.623568 kubelet[2675]: I0715 04:47:21.623499 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13855d06-ab16-423c-92da-241b51e79162-whisker-ca-bundle\") pod \"whisker-54dc45866c-ptnmh\" (UID: \"13855d06-ab16-423c-92da-241b51e79162\") " pod="calico-system/whisker-54dc45866c-ptnmh" Jul 15 04:47:21.623568 kubelet[2675]: I0715 04:47:21.623517 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvcm\" (UniqueName: \"kubernetes.io/projected/348e8462-7e3a-4709-a2d9-1e4cf20af94a-kube-api-access-ttvcm\") pod \"calico-apiserver-5555fdf644-2ngbv\" (UID: \"348e8462-7e3a-4709-a2d9-1e4cf20af94a\") " pod="calico-apiserver/calico-apiserver-5555fdf644-2ngbv" Jul 15 04:47:21.623568 kubelet[2675]: I0715 04:47:21.623534 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmh5s\" (UniqueName: 
\"kubernetes.io/projected/d34fce70-fd78-417f-b0ec-e1f83ffebe90-kube-api-access-jmh5s\") pod \"calico-apiserver-5555fdf644-6dvsm\" (UID: \"d34fce70-fd78-417f-b0ec-e1f83ffebe90\") " pod="calico-apiserver/calico-apiserver-5555fdf644-6dvsm" Jul 15 04:47:21.623568 kubelet[2675]: I0715 04:47:21.623550 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwdx\" (UniqueName: \"kubernetes.io/projected/20642bd1-05fb-429a-a40a-fda9cd5820ba-kube-api-access-whwdx\") pod \"coredns-674b8bbfcf-gg8bb\" (UID: \"20642bd1-05fb-429a-a40a-fda9cd5820ba\") " pod="kube-system/coredns-674b8bbfcf-gg8bb" Jul 15 04:47:21.623734 kubelet[2675]: I0715 04:47:21.623567 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89c9e28e-100f-4e96-85cc-ea4bcf2583b8-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-9sgfr\" (UID: \"89c9e28e-100f-4e96-85cc-ea4bcf2583b8\") " pod="calico-system/goldmane-768f4c5c69-9sgfr" Jul 15 04:47:21.623734 kubelet[2675]: I0715 04:47:21.623585 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c9e28e-100f-4e96-85cc-ea4bcf2583b8-config\") pod \"goldmane-768f4c5c69-9sgfr\" (UID: \"89c9e28e-100f-4e96-85cc-ea4bcf2583b8\") " pod="calico-system/goldmane-768f4c5c69-9sgfr" Jul 15 04:47:21.623734 kubelet[2675]: I0715 04:47:21.623600 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzn6\" (UniqueName: \"kubernetes.io/projected/89c9e28e-100f-4e96-85cc-ea4bcf2583b8-kube-api-access-wlzn6\") pod \"goldmane-768f4c5c69-9sgfr\" (UID: \"89c9e28e-100f-4e96-85cc-ea4bcf2583b8\") " pod="calico-system/goldmane-768f4c5c69-9sgfr" Jul 15 04:47:21.623734 kubelet[2675]: I0715 04:47:21.623614 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41c5b645-6734-48b4-a165-074bd90a2006-config-volume\") pod \"coredns-674b8bbfcf-xgvdj\" (UID: \"41c5b645-6734-48b4-a165-074bd90a2006\") " pod="kube-system/coredns-674b8bbfcf-xgvdj" Jul 15 04:47:21.623734 kubelet[2675]: I0715 04:47:21.623630 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml7j9\" (UniqueName: \"kubernetes.io/projected/41c5b645-6734-48b4-a165-074bd90a2006-kube-api-access-ml7j9\") pod \"coredns-674b8bbfcf-xgvdj\" (UID: \"41c5b645-6734-48b4-a165-074bd90a2006\") " pod="kube-system/coredns-674b8bbfcf-xgvdj" Jul 15 04:47:21.623840 kubelet[2675]: I0715 04:47:21.623645 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/348e8462-7e3a-4709-a2d9-1e4cf20af94a-calico-apiserver-certs\") pod \"calico-apiserver-5555fdf644-2ngbv\" (UID: \"348e8462-7e3a-4709-a2d9-1e4cf20af94a\") " pod="calico-apiserver/calico-apiserver-5555fdf644-2ngbv" Jul 15 04:47:21.623840 kubelet[2675]: I0715 04:47:21.623662 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d34fce70-fd78-417f-b0ec-e1f83ffebe90-calico-apiserver-certs\") pod \"calico-apiserver-5555fdf644-6dvsm\" (UID: \"d34fce70-fd78-417f-b0ec-e1f83ffebe90\") " pod="calico-apiserver/calico-apiserver-5555fdf644-6dvsm" Jul 15 
04:47:21.623840 kubelet[2675]: I0715 04:47:21.623680 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/89c9e28e-100f-4e96-85cc-ea4bcf2583b8-goldmane-key-pair\") pod \"goldmane-768f4c5c69-9sgfr\" (UID: \"89c9e28e-100f-4e96-85cc-ea4bcf2583b8\") " pod="calico-system/goldmane-768f4c5c69-9sgfr" Jul 15 04:47:21.623840 kubelet[2675]: I0715 04:47:21.623696 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dde1c96-ab62-40f8-ba4c-252df247f179-tigera-ca-bundle\") pod \"calico-kube-controllers-7bf87f4ddb-bbvjj\" (UID: \"3dde1c96-ab62-40f8-ba4c-252df247f179\") " pod="calico-system/calico-kube-controllers-7bf87f4ddb-bbvjj" Jul 15 04:47:21.623840 kubelet[2675]: I0715 04:47:21.623715 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13855d06-ab16-423c-92da-241b51e79162-whisker-backend-key-pair\") pod \"whisker-54dc45866c-ptnmh\" (UID: \"13855d06-ab16-423c-92da-241b51e79162\") " pod="calico-system/whisker-54dc45866c-ptnmh" Jul 15 04:47:21.623945 kubelet[2675]: I0715 04:47:21.623730 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20642bd1-05fb-429a-a40a-fda9cd5820ba-config-volume\") pod \"coredns-674b8bbfcf-gg8bb\" (UID: \"20642bd1-05fb-429a-a40a-fda9cd5820ba\") " pod="kube-system/coredns-674b8bbfcf-gg8bb" Jul 15 04:47:21.760918 systemd[1]: Created slice kubepods-besteffort-pod084d4de4_ee2b_48f0_ba8f_271876d17fba.slice - libcontainer container kubepods-besteffort-pod084d4de4_ee2b_48f0_ba8f_271876d17fba.slice. 
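The systemd entries above show each new pod being placed in a per-pod cgroup slice whose name encodes the QoS class and the pod UID with dashes mapped to underscores, while the kubelet's reconciler verifies the projected, configmap and secret volumes those pods need. The standalone Go sketch below just reproduces that slice-naming pattern for two UIDs seen in the log; it is an illustration of the convention visible here, not kubelet source, and the file name slicename.go is only a label.

    // slicename.go - minimal sketch of the cgroup slice naming pattern visible
    // in the systemd entries above (not kubelet's actual implementation).
    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces names like
    // "kubepods-besteffort-pod3dde1c96_ab62_40f8_ba4c_252df247f179.slice":
    // the QoS class plus the pod UID with dashes replaced by underscores.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "3dde1c96-ab62-40f8-ba4c-252df247f179"))
        fmt.Println(podSliceName("burstable", "20642bd1-05fb-429a-a40a-fda9cd5820ba"))
    }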
Jul 15 04:47:21.762858 containerd[1527]: time="2025-07-15T04:47:21.762823901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nvc78,Uid:084d4de4-ee2b-48f0-ba8f-271876d17fba,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:21.766089 kubelet[2675]: E0715 04:47:21.766057 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:21.766444 containerd[1527]: time="2025-07-15T04:47:21.766413366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gg8bb,Uid:20642bd1-05fb-429a-a40a-fda9cd5820ba,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:21.786384 containerd[1527]: time="2025-07-15T04:47:21.783346195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-2ngbv,Uid:348e8462-7e3a-4709-a2d9-1e4cf20af94a,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:47:21.800658 containerd[1527]: time="2025-07-15T04:47:21.797534920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9sgfr,Uid:89c9e28e-100f-4e96-85cc-ea4bcf2583b8,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:21.816945 containerd[1527]: time="2025-07-15T04:47:21.804829781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dc45866c-ptnmh,Uid:13855d06-ab16-423c-92da-241b51e79162,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:21.817403 kubelet[2675]: E0715 04:47:21.817157 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:21.823645 containerd[1527]: time="2025-07-15T04:47:21.821202676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgvdj,Uid:41c5b645-6734-48b4-a165-074bd90a2006,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:21.823645 containerd[1527]: time="2025-07-15T04:47:21.821529668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-6dvsm,Uid:d34fce70-fd78-417f-b0ec-e1f83ffebe90,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:47:21.874898 containerd[1527]: time="2025-07-15T04:47:21.874040318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 04:47:22.060201 containerd[1527]: time="2025-07-15T04:47:22.054402000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf87f4ddb-bbvjj,Uid:3dde1c96-ab62-40f8-ba4c-252df247f179,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:22.251449 containerd[1527]: time="2025-07-15T04:47:22.251279784Z" level=error msg="Failed to destroy network for sandbox \"1ac18261082133b08b9c16afd44d333ebc361b8f51f955c1ca1df83f2ddbc42b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.254465 containerd[1527]: time="2025-07-15T04:47:22.253944873Z" level=error msg="Failed to destroy network for sandbox \"d896ac437981cf5c9f1e3243bff8dbc03c1fc5bb70dd4d3d726b81dc626564b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.260458 containerd[1527]: time="2025-07-15T04:47:22.260347109Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-9sgfr,Uid:89c9e28e-100f-4e96-85cc-ea4bcf2583b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac18261082133b08b9c16afd44d333ebc361b8f51f955c1ca1df83f2ddbc42b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.261621 containerd[1527]: time="2025-07-15T04:47:22.261573343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf87f4ddb-bbvjj,Uid:3dde1c96-ab62-40f8-ba4c-252df247f179,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d896ac437981cf5c9f1e3243bff8dbc03c1fc5bb70dd4d3d726b81dc626564b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.265400 kubelet[2675]: E0715 04:47:22.265321 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac18261082133b08b9c16afd44d333ebc361b8f51f955c1ca1df83f2ddbc42b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.265534 kubelet[2675]: E0715 04:47:22.265430 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac18261082133b08b9c16afd44d333ebc361b8f51f955c1ca1df83f2ddbc42b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9sgfr" Jul 15 04:47:22.265534 kubelet[2675]: E0715 04:47:22.265473 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac18261082133b08b9c16afd44d333ebc361b8f51f955c1ca1df83f2ddbc42b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9sgfr" Jul 15 04:47:22.265636 kubelet[2675]: E0715 04:47:22.265595 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-9sgfr_calico-system(89c9e28e-100f-4e96-85cc-ea4bcf2583b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-9sgfr_calico-system(89c9e28e-100f-4e96-85cc-ea4bcf2583b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ac18261082133b08b9c16afd44d333ebc361b8f51f955c1ca1df83f2ddbc42b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9sgfr" podUID="89c9e28e-100f-4e96-85cc-ea4bcf2583b8" Jul 15 04:47:22.266224 containerd[1527]: time="2025-07-15T04:47:22.266184133Z" level=error msg="Failed to destroy network for sandbox \"e7c99879617cee928e04e5fe7997cda667a048cb490b1af97b1f53d087713f4d\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.267417 kubelet[2675]: E0715 04:47:22.266192 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d896ac437981cf5c9f1e3243bff8dbc03c1fc5bb70dd4d3d726b81dc626564b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.267648 kubelet[2675]: E0715 04:47:22.267610 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d896ac437981cf5c9f1e3243bff8dbc03c1fc5bb70dd4d3d726b81dc626564b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bf87f4ddb-bbvjj" Jul 15 04:47:22.267786 kubelet[2675]: E0715 04:47:22.267762 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d896ac437981cf5c9f1e3243bff8dbc03c1fc5bb70dd4d3d726b81dc626564b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bf87f4ddb-bbvjj" Jul 15 04:47:22.267917 kubelet[2675]: E0715 04:47:22.267883 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bf87f4ddb-bbvjj_calico-system(3dde1c96-ab62-40f8-ba4c-252df247f179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bf87f4ddb-bbvjj_calico-system(3dde1c96-ab62-40f8-ba4c-252df247f179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d896ac437981cf5c9f1e3243bff8dbc03c1fc5bb70dd4d3d726b81dc626564b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bf87f4ddb-bbvjj" podUID="3dde1c96-ab62-40f8-ba4c-252df247f179" Jul 15 04:47:22.268289 containerd[1527]: time="2025-07-15T04:47:22.268242845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nvc78,Uid:084d4de4-ee2b-48f0-ba8f-271876d17fba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c99879617cee928e04e5fe7997cda667a048cb490b1af97b1f53d087713f4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.268468 kubelet[2675]: E0715 04:47:22.268434 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c99879617cee928e04e5fe7997cda667a048cb490b1af97b1f53d087713f4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.268608 kubelet[2675]: E0715 04:47:22.268585 2675 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c99879617cee928e04e5fe7997cda667a048cb490b1af97b1f53d087713f4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:22.268670 kubelet[2675]: E0715 04:47:22.268610 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c99879617cee928e04e5fe7997cda667a048cb490b1af97b1f53d087713f4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nvc78" Jul 15 04:47:22.268670 kubelet[2675]: E0715 04:47:22.268648 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nvc78_calico-system(084d4de4-ee2b-48f0-ba8f-271876d17fba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nvc78_calico-system(084d4de4-ee2b-48f0-ba8f-271876d17fba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7c99879617cee928e04e5fe7997cda667a048cb490b1af97b1f53d087713f4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nvc78" podUID="084d4de4-ee2b-48f0-ba8f-271876d17fba" Jul 15 04:47:22.280516 containerd[1527]: time="2025-07-15T04:47:22.280466744Z" level=error msg="Failed to destroy network for sandbox \"b44c98d658209a8a873ca3b3395096c8f08640f3f60453bcd6f5ede455dda37a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.281581 containerd[1527]: time="2025-07-15T04:47:22.281534923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-2ngbv,Uid:348e8462-7e3a-4709-a2d9-1e4cf20af94a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44c98d658209a8a873ca3b3395096c8f08640f3f60453bcd6f5ede455dda37a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.281793 kubelet[2675]: E0715 04:47:22.281758 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44c98d658209a8a873ca3b3395096c8f08640f3f60453bcd6f5ede455dda37a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.281846 kubelet[2675]: E0715 04:47:22.281812 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44c98d658209a8a873ca3b3395096c8f08640f3f60453bcd6f5ede455dda37a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5555fdf644-2ngbv" Jul 15 04:47:22.281846 kubelet[2675]: E0715 04:47:22.281832 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44c98d658209a8a873ca3b3395096c8f08640f3f60453bcd6f5ede455dda37a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5555fdf644-2ngbv" Jul 15 04:47:22.281894 kubelet[2675]: E0715 04:47:22.281874 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5555fdf644-2ngbv_calico-apiserver(348e8462-7e3a-4709-a2d9-1e4cf20af94a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5555fdf644-2ngbv_calico-apiserver(348e8462-7e3a-4709-a2d9-1e4cf20af94a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b44c98d658209a8a873ca3b3395096c8f08640f3f60453bcd6f5ede455dda37a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5555fdf644-2ngbv" podUID="348e8462-7e3a-4709-a2d9-1e4cf20af94a" Jul 15 04:47:22.285070 containerd[1527]: time="2025-07-15T04:47:22.285028489Z" level=error msg="Failed to destroy network for sandbox \"7671f593a13244405fce68ede9e596dc24097ff23603dc5abb8e0c55f955365d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.285653 containerd[1527]: time="2025-07-15T04:47:22.285593582Z" level=error msg="Failed to destroy network for sandbox \"10a7f48e4aad23827e19c9c4354bfbd88d664eb796434196a301c9f4ecf4a2d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.286064 containerd[1527]: time="2025-07-15T04:47:22.286016181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgvdj,Uid:41c5b645-6734-48b4-a165-074bd90a2006,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7671f593a13244405fce68ede9e596dc24097ff23603dc5abb8e0c55f955365d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.286254 kubelet[2675]: E0715 04:47:22.286218 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7671f593a13244405fce68ede9e596dc24097ff23603dc5abb8e0c55f955365d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.286317 kubelet[2675]: E0715 04:47:22.286266 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7671f593a13244405fce68ede9e596dc24097ff23603dc5abb8e0c55f955365d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xgvdj" Jul 15 04:47:22.286317 kubelet[2675]: E0715 04:47:22.286289 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7671f593a13244405fce68ede9e596dc24097ff23603dc5abb8e0c55f955365d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xgvdj" Jul 15 04:47:22.286404 kubelet[2675]: E0715 04:47:22.286327 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xgvdj_kube-system(41c5b645-6734-48b4-a165-074bd90a2006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xgvdj_kube-system(41c5b645-6734-48b4-a165-074bd90a2006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7671f593a13244405fce68ede9e596dc24097ff23603dc5abb8e0c55f955365d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xgvdj" podUID="41c5b645-6734-48b4-a165-074bd90a2006" Jul 15 04:47:22.286501 containerd[1527]: time="2025-07-15T04:47:22.286472063Z" level=error msg="Failed to destroy network for sandbox \"7ca451e9b88a558cea4a92271c357006f02a7241d4afe930af0cf75fc31bd04b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.287781 containerd[1527]: time="2025-07-15T04:47:22.287606809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-6dvsm,Uid:d34fce70-fd78-417f-b0ec-e1f83ffebe90,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a7f48e4aad23827e19c9c4354bfbd88d664eb796434196a301c9f4ecf4a2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.288183 containerd[1527]: time="2025-07-15T04:47:22.288045250Z" level=error msg="Failed to destroy network for sandbox \"35379f1af689ba2a5a03c084862f1630e2a1f3a5ffca20971991a2f23a8df57d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.288243 kubelet[2675]: E0715 04:47:22.287965 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a7f48e4aad23827e19c9c4354bfbd88d664eb796434196a301c9f4ecf4a2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.288243 kubelet[2675]: E0715 04:47:22.288009 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a7f48e4aad23827e19c9c4354bfbd88d664eb796434196a301c9f4ecf4a2d8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5555fdf644-6dvsm" Jul 15 04:47:22.288243 kubelet[2675]: E0715 04:47:22.288035 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a7f48e4aad23827e19c9c4354bfbd88d664eb796434196a301c9f4ecf4a2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5555fdf644-6dvsm" Jul 15 04:47:22.288318 containerd[1527]: time="2025-07-15T04:47:22.288208905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dc45866c-ptnmh,Uid:13855d06-ab16-423c-92da-241b51e79162,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca451e9b88a558cea4a92271c357006f02a7241d4afe930af0cf75fc31bd04b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.288412 kubelet[2675]: E0715 04:47:22.288078 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5555fdf644-6dvsm_calico-apiserver(d34fce70-fd78-417f-b0ec-e1f83ffebe90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5555fdf644-6dvsm_calico-apiserver(d34fce70-fd78-417f-b0ec-e1f83ffebe90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10a7f48e4aad23827e19c9c4354bfbd88d664eb796434196a301c9f4ecf4a2d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5555fdf644-6dvsm" podUID="d34fce70-fd78-417f-b0ec-e1f83ffebe90" Jul 15 04:47:22.288806 kubelet[2675]: E0715 04:47:22.288346 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca451e9b88a558cea4a92271c357006f02a7241d4afe930af0cf75fc31bd04b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.288863 kubelet[2675]: E0715 04:47:22.288823 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca451e9b88a558cea4a92271c357006f02a7241d4afe930af0cf75fc31bd04b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54dc45866c-ptnmh" Jul 15 04:47:22.288863 kubelet[2675]: E0715 04:47:22.288846 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca451e9b88a558cea4a92271c357006f02a7241d4afe930af0cf75fc31bd04b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54dc45866c-ptnmh" Jul 15 
04:47:22.288935 kubelet[2675]: E0715 04:47:22.288887 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54dc45866c-ptnmh_calico-system(13855d06-ab16-423c-92da-241b51e79162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54dc45866c-ptnmh_calico-system(13855d06-ab16-423c-92da-241b51e79162)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ca451e9b88a558cea4a92271c357006f02a7241d4afe930af0cf75fc31bd04b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54dc45866c-ptnmh" podUID="13855d06-ab16-423c-92da-241b51e79162" Jul 15 04:47:22.289783 containerd[1527]: time="2025-07-15T04:47:22.289747649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gg8bb,Uid:20642bd1-05fb-429a-a40a-fda9cd5820ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35379f1af689ba2a5a03c084862f1630e2a1f3a5ffca20971991a2f23a8df57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.289925 kubelet[2675]: E0715 04:47:22.289897 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35379f1af689ba2a5a03c084862f1630e2a1f3a5ffca20971991a2f23a8df57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:47:22.289965 kubelet[2675]: E0715 04:47:22.289928 2675 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35379f1af689ba2a5a03c084862f1630e2a1f3a5ffca20971991a2f23a8df57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gg8bb" Jul 15 04:47:22.289965 kubelet[2675]: E0715 04:47:22.289944 2675 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35379f1af689ba2a5a03c084862f1630e2a1f3a5ffca20971991a2f23a8df57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gg8bb" Jul 15 04:47:22.290017 kubelet[2675]: E0715 04:47:22.289985 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gg8bb_kube-system(20642bd1-05fb-429a-a40a-fda9cd5820ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gg8bb_kube-system(20642bd1-05fb-429a-a40a-fda9cd5820ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35379f1af689ba2a5a03c084862f1630e2a1f3a5ffca20971991a2f23a8df57d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gg8bb" 
podUID="20642bd1-05fb-429a-a40a-fda9cd5820ba" Jul 15 04:47:25.874162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827878324.mount: Deactivated successfully. Jul 15 04:47:25.879941 kubelet[2675]: I0715 04:47:25.879880 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:47:25.880414 kubelet[2675]: E0715 04:47:25.880218 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:26.215902 containerd[1527]: time="2025-07-15T04:47:26.215745774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:26.236297 containerd[1527]: time="2025-07-15T04:47:26.216831743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:47:26.236297 containerd[1527]: time="2025-07-15T04:47:26.229232800Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:26.236475 containerd[1527]: time="2025-07-15T04:47:26.232007028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.357877381s" Jul 15 04:47:26.236475 containerd[1527]: time="2025-07-15T04:47:26.236406109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:47:26.237290 containerd[1527]: time="2025-07-15T04:47:26.237243778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:26.253624 containerd[1527]: time="2025-07-15T04:47:26.253570558Z" level=info msg="CreateContainer within sandbox \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:47:26.267857 containerd[1527]: time="2025-07-15T04:47:26.267741241Z" level=info msg="Container f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:26.280345 containerd[1527]: time="2025-07-15T04:47:26.280274029Z" level=info msg="CreateContainer within sandbox \"1f3cbf0d15e2177adc6b2b867d3e1312fc34c70d6933596c5cfc88bf7acc7239\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2\"" Jul 15 04:47:26.280956 containerd[1527]: time="2025-07-15T04:47:26.280917042Z" level=info msg="StartContainer for \"f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2\"" Jul 15 04:47:26.282638 containerd[1527]: time="2025-07-15T04:47:26.282610701Z" level=info msg="connecting to shim f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2" address="unix:///run/containerd/s/6204c916c8ec9110a4280ce61abb6d3db870c6b863ce784cc6c62b40af0cf710" protocol=ttrpc version=3 Jul 15 04:47:26.312566 systemd[1]: Started 
cri-containerd-f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2.scope - libcontainer container f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2. Jul 15 04:47:26.356871 containerd[1527]: time="2025-07-15T04:47:26.356835432Z" level=info msg="StartContainer for \"f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2\" returns successfully" Jul 15 04:47:26.598614 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:47:26.598734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 04:47:26.762761 kubelet[2675]: I0715 04:47:26.762716 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqm5s\" (UniqueName: \"kubernetes.io/projected/13855d06-ab16-423c-92da-241b51e79162-kube-api-access-kqm5s\") pod \"13855d06-ab16-423c-92da-241b51e79162\" (UID: \"13855d06-ab16-423c-92da-241b51e79162\") " Jul 15 04:47:26.762761 kubelet[2675]: I0715 04:47:26.762770 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13855d06-ab16-423c-92da-241b51e79162-whisker-backend-key-pair\") pod \"13855d06-ab16-423c-92da-241b51e79162\" (UID: \"13855d06-ab16-423c-92da-241b51e79162\") " Jul 15 04:47:26.763353 kubelet[2675]: I0715 04:47:26.762799 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13855d06-ab16-423c-92da-241b51e79162-whisker-ca-bundle\") pod \"13855d06-ab16-423c-92da-241b51e79162\" (UID: \"13855d06-ab16-423c-92da-241b51e79162\") " Jul 15 04:47:26.770834 kubelet[2675]: I0715 04:47:26.770797 2675 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13855d06-ab16-423c-92da-241b51e79162-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "13855d06-ab16-423c-92da-241b51e79162" (UID: "13855d06-ab16-423c-92da-241b51e79162"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 04:47:26.775370 kubelet[2675]: I0715 04:47:26.773458 2675 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13855d06-ab16-423c-92da-241b51e79162-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "13855d06-ab16-423c-92da-241b51e79162" (UID: "13855d06-ab16-423c-92da-241b51e79162"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 04:47:26.775846 kubelet[2675]: I0715 04:47:26.775818 2675 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13855d06-ab16-423c-92da-241b51e79162-kube-api-access-kqm5s" (OuterVolumeSpecName: "kube-api-access-kqm5s") pod "13855d06-ab16-423c-92da-241b51e79162" (UID: "13855d06-ab16-423c-92da-241b51e79162"). InnerVolumeSpecName "kube-api-access-kqm5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:47:26.863405 kubelet[2675]: I0715 04:47:26.863283 2675 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13855d06-ab16-423c-92da-241b51e79162-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 04:47:26.863405 kubelet[2675]: I0715 04:47:26.863321 2675 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13855d06-ab16-423c-92da-241b51e79162-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 04:47:26.863405 kubelet[2675]: I0715 04:47:26.863332 2675 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqm5s\" (UniqueName: \"kubernetes.io/projected/13855d06-ab16-423c-92da-241b51e79162-kube-api-access-kqm5s\") on node \"localhost\" DevicePath \"\"" Jul 15 04:47:26.875053 systemd[1]: var-lib-kubelet-pods-13855d06\x2dab16\x2d423c\x2d92da\x2d241b51e79162-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqm5s.mount: Deactivated successfully. Jul 15 04:47:26.875138 systemd[1]: var-lib-kubelet-pods-13855d06\x2dab16\x2d423c\x2d92da\x2d241b51e79162-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 04:47:26.886980 kubelet[2675]: E0715 04:47:26.886935 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:26.891505 systemd[1]: Removed slice kubepods-besteffort-pod13855d06_ab16_423c_92da_241b51e79162.slice - libcontainer container kubepods-besteffort-pod13855d06_ab16_423c_92da_241b51e79162.slice. Jul 15 04:47:26.911646 kubelet[2675]: I0715 04:47:26.911586 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gzkfz" podStartSLOduration=1.6994899239999999 podStartE2EDuration="13.911573358s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="2025-07-15 04:47:14.025393563 +0000 UTC m=+19.360947441" lastFinishedPulling="2025-07-15 04:47:26.237476997 +0000 UTC m=+31.573030875" observedRunningTime="2025-07-15 04:47:26.911250132 +0000 UTC m=+32.246804090" watchObservedRunningTime="2025-07-15 04:47:26.911573358 +0000 UTC m=+32.247127276" Jul 15 04:47:26.978284 systemd[1]: Created slice kubepods-besteffort-pod872c69ce_4bab_493e_b46b_309092a44c63.slice - libcontainer container kubepods-besteffort-pod872c69ce_4bab_493e_b46b_309092a44c63.slice. 
Jul 15 04:47:27.067438 kubelet[2675]: I0715 04:47:27.067391 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8df\" (UniqueName: \"kubernetes.io/projected/872c69ce-4bab-493e-b46b-309092a44c63-kube-api-access-rp8df\") pod \"whisker-6c96bfcdc7-2mcln\" (UID: \"872c69ce-4bab-493e-b46b-309092a44c63\") " pod="calico-system/whisker-6c96bfcdc7-2mcln" Jul 15 04:47:27.067438 kubelet[2675]: I0715 04:47:27.067440 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/872c69ce-4bab-493e-b46b-309092a44c63-whisker-backend-key-pair\") pod \"whisker-6c96bfcdc7-2mcln\" (UID: \"872c69ce-4bab-493e-b46b-309092a44c63\") " pod="calico-system/whisker-6c96bfcdc7-2mcln" Jul 15 04:47:27.067614 kubelet[2675]: I0715 04:47:27.067459 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872c69ce-4bab-493e-b46b-309092a44c63-whisker-ca-bundle\") pod \"whisker-6c96bfcdc7-2mcln\" (UID: \"872c69ce-4bab-493e-b46b-309092a44c63\") " pod="calico-system/whisker-6c96bfcdc7-2mcln" Jul 15 04:47:27.283004 containerd[1527]: time="2025-07-15T04:47:27.282823944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c96bfcdc7-2mcln,Uid:872c69ce-4bab-493e-b46b-309092a44c63,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:27.500525 systemd-networkd[1433]: cali6edf56609b8: Link UP Jul 15 04:47:27.501166 systemd-networkd[1433]: cali6edf56609b8: Gained carrier Jul 15 04:47:27.517159 containerd[1527]: 2025-07-15 04:47:27.303 [INFO][3899] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:47:27.517159 containerd[1527]: 2025-07-15 04:47:27.359 [INFO][3899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0 whisker-6c96bfcdc7- calico-system 872c69ce-4bab-493e-b46b-309092a44c63 903 0 2025-07-15 04:47:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c96bfcdc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c96bfcdc7-2mcln eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6edf56609b8 [] [] }} ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-" Jul 15 04:47:27.517159 containerd[1527]: 2025-07-15 04:47:27.359 [INFO][3899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.517159 containerd[1527]: 2025-07-15 04:47:27.442 [INFO][3912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" HandleID="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Workload="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.442 [INFO][3912] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" HandleID="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Workload="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c96bfcdc7-2mcln", "timestamp":"2025-07-15 04:47:27.442760099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.443 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.443 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.443 [INFO][3912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.455 [INFO][3912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" host="localhost" Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.461 [INFO][3912] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.467 [INFO][3912] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.470 [INFO][3912] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.476 [INFO][3912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:27.517466 containerd[1527]: 2025-07-15 04:47:27.476 [INFO][3912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" host="localhost" Jul 15 04:47:27.517724 containerd[1527]: 2025-07-15 04:47:27.478 [INFO][3912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01 Jul 15 04:47:27.517724 containerd[1527]: 2025-07-15 04:47:27.484 [INFO][3912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" host="localhost" Jul 15 04:47:27.517724 containerd[1527]: 2025-07-15 04:47:27.488 [INFO][3912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" host="localhost" Jul 15 04:47:27.517724 containerd[1527]: 2025-07-15 04:47:27.488 [INFO][3912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" host="localhost" Jul 15 04:47:27.517724 containerd[1527]: 2025-07-15 04:47:27.489 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:27.517724 containerd[1527]: 2025-07-15 04:47:27.489 [INFO][3912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" HandleID="k8s-pod-network.aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Workload="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.517895 containerd[1527]: 2025-07-15 04:47:27.491 [INFO][3899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0", GenerateName:"whisker-6c96bfcdc7-", Namespace:"calico-system", SelfLink:"", UID:"872c69ce-4bab-493e-b46b-309092a44c63", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c96bfcdc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c96bfcdc7-2mcln", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6edf56609b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:27.517895 containerd[1527]: 2025-07-15 04:47:27.492 [INFO][3899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.517969 containerd[1527]: 2025-07-15 04:47:27.492 [INFO][3899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6edf56609b8 ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.517969 containerd[1527]: 2025-07-15 04:47:27.501 [INFO][3899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.518049 containerd[1527]: 2025-07-15 04:47:27.502 [INFO][3899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0", GenerateName:"whisker-6c96bfcdc7-", Namespace:"calico-system", SelfLink:"", UID:"872c69ce-4bab-493e-b46b-309092a44c63", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c96bfcdc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01", Pod:"whisker-6c96bfcdc7-2mcln", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6edf56609b8", MAC:"ae:eb:76:d2:97:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:27.518101 containerd[1527]: 2025-07-15 04:47:27.513 [INFO][3899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" Namespace="calico-system" Pod="whisker-6c96bfcdc7-2mcln" WorkloadEndpoint="localhost-k8s-whisker--6c96bfcdc7--2mcln-eth0" Jul 15 04:47:27.576964 containerd[1527]: time="2025-07-15T04:47:27.576810236Z" level=info msg="connecting to shim aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01" address="unix:///run/containerd/s/544ba4acefcbbc49a90412ac8a2fa7dcf9a5a5abe10d2542ec69e864076f5854" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:27.596552 systemd[1]: Started cri-containerd-aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01.scope - libcontainer container aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01. 
Jul 15 04:47:27.609489 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:27.629904 containerd[1527]: time="2025-07-15T04:47:27.629863854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c96bfcdc7-2mcln,Uid:872c69ce-4bab-493e-b46b-309092a44c63,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01\"" Jul 15 04:47:27.631704 containerd[1527]: time="2025-07-15T04:47:27.631506345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:47:27.889202 kubelet[2675]: I0715 04:47:27.889158 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:47:28.418150 systemd-networkd[1433]: vxlan.calico: Link UP Jul 15 04:47:28.418158 systemd-networkd[1433]: vxlan.calico: Gained carrier Jul 15 04:47:28.693097 containerd[1527]: time="2025-07-15T04:47:28.692972575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:28.693602 containerd[1527]: time="2025-07-15T04:47:28.693561181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:47:28.697478 containerd[1527]: time="2025-07-15T04:47:28.697442840Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:28.700030 containerd[1527]: time="2025-07-15T04:47:28.699994716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:28.700721 containerd[1527]: time="2025-07-15T04:47:28.700556519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.069018532s" Jul 15 04:47:28.700721 containerd[1527]: time="2025-07-15T04:47:28.700582601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:47:28.704228 containerd[1527]: time="2025-07-15T04:47:28.704196360Z" level=info msg="CreateContainer within sandbox \"aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:47:28.709232 containerd[1527]: time="2025-07-15T04:47:28.709184224Z" level=info msg="Container 75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:28.758018 kubelet[2675]: I0715 04:47:28.757975 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13855d06-ab16-423c-92da-241b51e79162" path="/var/lib/kubelet/pods/13855d06-ab16-423c-92da-241b51e79162/volumes" Jul 15 04:47:28.770032 containerd[1527]: time="2025-07-15T04:47:28.769971346Z" level=info msg="CreateContainer within sandbox \"aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f\"" Jul 15 04:47:28.770573 containerd[1527]: time="2025-07-15T04:47:28.770536029Z" level=info msg="StartContainer for \"75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f\"" Jul 15 04:47:28.771853 containerd[1527]: time="2025-07-15T04:47:28.771783165Z" level=info msg="connecting to shim 75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f" address="unix:///run/containerd/s/544ba4acefcbbc49a90412ac8a2fa7dcf9a5a5abe10d2542ec69e864076f5854" protocol=ttrpc version=3 Jul 15 04:47:28.793590 systemd[1]: Started cri-containerd-75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f.scope - libcontainer container 75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f. Jul 15 04:47:28.872843 containerd[1527]: time="2025-07-15T04:47:28.872801626Z" level=info msg="StartContainer for \"75add94a5012fbeaa1b8c931c3a2e16785e8f2a4f26138c206fbae6e0a16f57f\" returns successfully" Jul 15 04:47:28.873966 containerd[1527]: time="2025-07-15T04:47:28.873919712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:47:29.082556 systemd-networkd[1433]: cali6edf56609b8: Gained IPv6LL Jul 15 04:47:30.298523 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Jul 15 04:47:31.043285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370256615.mount: Deactivated successfully. Jul 15 04:47:31.054586 containerd[1527]: time="2025-07-15T04:47:31.054516144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:31.055041 containerd[1527]: time="2025-07-15T04:47:31.055009458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:47:31.055931 containerd[1527]: time="2025-07-15T04:47:31.055895960Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:31.057697 containerd[1527]: time="2025-07-15T04:47:31.057662084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:31.058370 containerd[1527]: time="2025-07-15T04:47:31.058326291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.184370616s" Jul 15 04:47:31.058450 containerd[1527]: time="2025-07-15T04:47:31.058380214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:47:31.062909 containerd[1527]: time="2025-07-15T04:47:31.062876969Z" level=info msg="CreateContainer within sandbox \"aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:47:31.068663 containerd[1527]: time="2025-07-15T04:47:31.068610211Z" level=info msg="Container 
bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:31.075608 containerd[1527]: time="2025-07-15T04:47:31.075559537Z" level=info msg="CreateContainer within sandbox \"aa3bb2542717b5fd125658f42b2606852f6642aa5a878d15a08bf8608c135a01\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad\"" Jul 15 04:47:31.076058 containerd[1527]: time="2025-07-15T04:47:31.076029890Z" level=info msg="StartContainer for \"bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad\"" Jul 15 04:47:31.085553 containerd[1527]: time="2025-07-15T04:47:31.085497873Z" level=info msg="connecting to shim bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad" address="unix:///run/containerd/s/544ba4acefcbbc49a90412ac8a2fa7dcf9a5a5abe10d2542ec69e864076f5854" protocol=ttrpc version=3 Jul 15 04:47:31.112602 systemd[1]: Started cri-containerd-bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad.scope - libcontainer container bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad. Jul 15 04:47:31.205039 containerd[1527]: time="2025-07-15T04:47:31.204985880Z" level=info msg="StartContainer for \"bbb6234123aff1ba423ff94325a4638a5a1c249023ecbb4e80f19782fbd149ad\" returns successfully" Jul 15 04:47:31.915130 kubelet[2675]: I0715 04:47:31.915029 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c96bfcdc7-2mcln" podStartSLOduration=2.48706289 podStartE2EDuration="5.914920471s" podCreationTimestamp="2025-07-15 04:47:26 +0000 UTC" firstStartedPulling="2025-07-15 04:47:27.631224882 +0000 UTC m=+32.966778800" lastFinishedPulling="2025-07-15 04:47:31.059082463 +0000 UTC m=+36.394636381" observedRunningTime="2025-07-15 04:47:31.913637781 +0000 UTC m=+37.249191699" watchObservedRunningTime="2025-07-15 04:47:31.914920471 +0000 UTC m=+37.250474389" Jul 15 04:47:33.755860 containerd[1527]: time="2025-07-15T04:47:33.755814251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf87f4ddb-bbvjj,Uid:3dde1c96-ab62-40f8-ba4c-252df247f179,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:33.756246 containerd[1527]: time="2025-07-15T04:47:33.755814211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nvc78,Uid:084d4de4-ee2b-48f0-ba8f-271876d17fba,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:33.914938 systemd-networkd[1433]: cali4d36ef5472f: Link UP Jul 15 04:47:33.915144 systemd-networkd[1433]: cali4d36ef5472f: Gained carrier Jul 15 04:47:33.930595 containerd[1527]: 2025-07-15 04:47:33.823 [INFO][4284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nvc78-eth0 csi-node-driver- calico-system 084d4de4-ee2b-48f0-ba8f-271876d17fba 717 0 2025-07-15 04:47:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nvc78 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4d36ef5472f [] [] }} ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-" Jul 15 04:47:33.930595 containerd[1527]: 2025-07-15 04:47:33.823 [INFO][4284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.930595 containerd[1527]: 2025-07-15 04:47:33.865 [INFO][4302] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" HandleID="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Workload="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.865 [INFO][4302] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" HandleID="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Workload="localhost-k8s-csi--node--driver--nvc78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051b250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nvc78", "timestamp":"2025-07-15 04:47:33.865777797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.865 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.866 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.866 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.877 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" host="localhost" Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.886 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.891 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.892 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.894 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:33.930893 containerd[1527]: 2025-07-15 04:47:33.895 [INFO][4302] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" host="localhost" Jul 15 04:47:33.931124 containerd[1527]: 2025-07-15 04:47:33.896 [INFO][4302] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888 Jul 15 04:47:33.931124 containerd[1527]: 2025-07-15 04:47:33.901 [INFO][4302] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" host="localhost" Jul 15 04:47:33.931124 containerd[1527]: 2025-07-15 04:47:33.906 [INFO][4302] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" host="localhost" Jul 15 04:47:33.931124 containerd[1527]: 2025-07-15 04:47:33.906 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" host="localhost" Jul 15 04:47:33.931124 containerd[1527]: 2025-07-15 04:47:33.906 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
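[Annotation] The trace above shows Calico's IPAM flow for csi-node-driver-nvc78: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, load the block, hand out the next free address (192.168.88.130/26 here), and release the lock. The same sequence repeats below for every pod in this excerpt. The sketch that follows is not Calico's allocator; it is a minimal standard-library illustration of "assign the lowest free address in the affine block", assuming for illustration that 192.168.88.128 and .129 were already taken earlier in the boot (typically the node's vxlan.calico tunnel address and the whisker pod, whose IPAM trace is not in this excerpt).

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddr walks the block from its base address upwards and returns the
// first address not already in use. Calico's real allocator keeps a per-block
// allocation bitmap and handles reservations and handles; this only mirrors
// the "assign the next free address from the affine block" step in the log.
func nextFreeAddr(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log

	// Assumed prior allocations (not shown in this excerpt): the node's
	// vxlan.calico tunnel address and the whisker pod.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true,
		netip.MustParseAddr("192.168.88.129"): true,
	}

	if addr, ok := nextFreeAddr(block, used); ok {
		fmt.Printf("next free address in %s: %s\n", block, addr) // 192.168.88.130
	}
}

Run repeatedly while adding each result to the used set and it reproduces the .130, .131, .132, .133, .134 sequence seen in the assignments below.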
Jul 15 04:47:33.931124 containerd[1527]: 2025-07-15 04:47:33.906 [INFO][4302] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" HandleID="k8s-pod-network.713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Workload="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.931261 containerd[1527]: 2025-07-15 04:47:33.910 [INFO][4284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nvc78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"084d4de4-ee2b-48f0-ba8f-271876d17fba", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nvc78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d36ef5472f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:33.931314 containerd[1527]: 2025-07-15 04:47:33.910 [INFO][4284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.931314 containerd[1527]: 2025-07-15 04:47:33.910 [INFO][4284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d36ef5472f ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.931314 containerd[1527]: 2025-07-15 04:47:33.914 [INFO][4284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.931482 containerd[1527]: 2025-07-15 04:47:33.915 [INFO][4284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nvc78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"084d4de4-ee2b-48f0-ba8f-271876d17fba", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888", Pod:"csi-node-driver-nvc78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d36ef5472f", MAC:"02:3d:86:83:7b:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:33.931542 containerd[1527]: 2025-07-15 04:47:33.927 [INFO][4284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" Namespace="calico-system" Pod="csi-node-driver-nvc78" WorkloadEndpoint="localhost-k8s-csi--node--driver--nvc78-eth0" Jul 15 04:47:33.958454 containerd[1527]: time="2025-07-15T04:47:33.958408764Z" level=info msg="connecting to shim 713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888" address="unix:///run/containerd/s/8c289b12d6542c4606d29464c7cf088036169e2639b346b90eea7958b6701321" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:33.979545 systemd[1]: Started cri-containerd-713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888.scope - libcontainer container 713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888. 
Jul 15 04:47:33.993853 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:34.013419 containerd[1527]: time="2025-07-15T04:47:34.013293184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nvc78,Uid:084d4de4-ee2b-48f0-ba8f-271876d17fba,Namespace:calico-system,Attempt:0,} returns sandbox id \"713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888\"" Jul 15 04:47:34.019722 containerd[1527]: time="2025-07-15T04:47:34.019654069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:47:34.020408 systemd-networkd[1433]: cali642bee7b838: Link UP Jul 15 04:47:34.020615 systemd-networkd[1433]: cali642bee7b838: Gained carrier Jul 15 04:47:34.033597 containerd[1527]: 2025-07-15 04:47:33.822 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0 calico-kube-controllers-7bf87f4ddb- calico-system 3dde1c96-ab62-40f8-ba4c-252df247f179 820 0 2025-07-15 04:47:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bf87f4ddb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bf87f4ddb-bbvjj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali642bee7b838 [] [] }} ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-" Jul 15 04:47:34.033597 containerd[1527]: 2025-07-15 04:47:33.823 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.033597 containerd[1527]: 2025-07-15 04:47:33.866 [INFO][4304] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" HandleID="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Workload="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.866 [INFO][4304] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" HandleID="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Workload="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bf87f4ddb-bbvjj", "timestamp":"2025-07-15 04:47:33.866112859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.866 [INFO][4304] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.906 [INFO][4304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.906 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.978 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" host="localhost" Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.986 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.992 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.994 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.996 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:34.033793 containerd[1527]: 2025-07-15 04:47:33.996 [INFO][4304] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" host="localhost" Jul 15 04:47:34.034023 containerd[1527]: 2025-07-15 04:47:33.998 [INFO][4304] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff Jul 15 04:47:34.034023 containerd[1527]: 2025-07-15 04:47:34.003 [INFO][4304] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" host="localhost" Jul 15 04:47:34.034023 containerd[1527]: 2025-07-15 04:47:34.009 [INFO][4304] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" host="localhost" Jul 15 04:47:34.034023 containerd[1527]: 2025-07-15 04:47:34.009 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" host="localhost" Jul 15 04:47:34.034023 containerd[1527]: 2025-07-15 04:47:34.009 [INFO][4304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:34.034023 containerd[1527]: 2025-07-15 04:47:34.009 [INFO][4304] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" HandleID="k8s-pod-network.65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Workload="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.034196 containerd[1527]: 2025-07-15 04:47:34.015 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0", GenerateName:"calico-kube-controllers-7bf87f4ddb-", Namespace:"calico-system", SelfLink:"", UID:"3dde1c96-ab62-40f8-ba4c-252df247f179", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bf87f4ddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bf87f4ddb-bbvjj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali642bee7b838", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:34.034266 containerd[1527]: 2025-07-15 04:47:34.016 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.034266 containerd[1527]: 2025-07-15 04:47:34.016 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali642bee7b838 ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.034266 containerd[1527]: 2025-07-15 04:47:34.020 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.034345 containerd[1527]: 2025-07-15 04:47:34.020 [INFO][4272] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0", GenerateName:"calico-kube-controllers-7bf87f4ddb-", Namespace:"calico-system", SelfLink:"", UID:"3dde1c96-ab62-40f8-ba4c-252df247f179", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bf87f4ddb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff", Pod:"calico-kube-controllers-7bf87f4ddb-bbvjj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali642bee7b838", MAC:"4e:a7:ba:89:69:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:34.034791 containerd[1527]: 2025-07-15 04:47:34.030 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" Namespace="calico-system" Pod="calico-kube-controllers-7bf87f4ddb-bbvjj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf87f4ddb--bbvjj-eth0" Jul 15 04:47:34.053072 containerd[1527]: time="2025-07-15T04:47:34.052880584Z" level=info msg="connecting to shim 65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff" address="unix:///run/containerd/s/570d350f3f2c154db3e52acd4e95042e7c80747bb93ed5509700a91f576a18e8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:34.081521 systemd[1]: Started cri-containerd-65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff.scope - libcontainer container 65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff. 
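[Annotation] Every containerd entry in this dump has the same two-layer shape: a journald prefix ("Jul 15 04:47:34.114040 containerd[1527]:") followed by either a logfmt-style payload (time=..., level=..., msg="...") or, for the CNI plugin, a bracketed trace line. The sketch below pulls the payload fields out of one such line; the regular expressions are written only for the format seen in this dump, not as a general journald or logfmt parser.

package main

import (
	"fmt"
	"regexp"
)

// journalRe splits the journald prefix (timestamp, unit, pid) from the unit's own output.
var journalRe = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) (\S+)\[(\d+)\]: (.*)$`)

// containerdRe extracts the time/level/msg fields from containerd's logfmt payload.
// msg is always the last field on these lines, so a greedy match to the final
// quote is sufficient here even though msg contains escaped quotes.
var containerdRe = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="(.*)"$`)

func main() {
	// Abbreviated sample taken from the log above.
	line := `Jul 15 04:47:34.114040 containerd[1527]: time="2025-07-15T04:47:34.114001915Z" level=info msg="RunPodSandbox returns sandbox id \"65d61d...\""`

	m := journalRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a journald-prefixed line")
		return
	}
	fmt.Println("unit:", m[2], "pid:", m[3])

	if c := containerdRe.FindStringSubmatch(m[4]); c != nil {
		fmt.Println("time: ", c[1])
		fmt.Println("level:", c[2])
		fmt.Println("msg:  ", c[3])
	}
}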
Jul 15 04:47:34.094188 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:34.114040 containerd[1527]: time="2025-07-15T04:47:34.114001915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf87f4ddb-bbvjj,Uid:3dde1c96-ab62-40f8-ba4c-252df247f179,Namespace:calico-system,Attempt:0,} returns sandbox id \"65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff\"" Jul 15 04:47:34.756207 containerd[1527]: time="2025-07-15T04:47:34.756146354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-6dvsm,Uid:d34fce70-fd78-417f-b0ec-e1f83ffebe90,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:47:34.884090 systemd-networkd[1433]: calibbe8343c83a: Link UP Jul 15 04:47:34.884316 systemd-networkd[1433]: calibbe8343c83a: Gained carrier Jul 15 04:47:34.897417 containerd[1527]: 2025-07-15 04:47:34.793 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0 calico-apiserver-5555fdf644- calico-apiserver d34fce70-fd78-417f-b0ec-e1f83ffebe90 830 0 2025-07-15 04:47:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5555fdf644 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5555fdf644-6dvsm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbe8343c83a [] [] }} ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-" Jul 15 04:47:34.897417 containerd[1527]: 2025-07-15 04:47:34.793 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.897417 containerd[1527]: 2025-07-15 04:47:34.832 [INFO][4445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" HandleID="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Workload="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.832 [INFO][4445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" HandleID="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Workload="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5555fdf644-6dvsm", "timestamp":"2025-07-15 04:47:34.832130632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.832 [INFO][4445] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.832 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.832 [INFO][4445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.844 [INFO][4445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" host="localhost" Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.851 [INFO][4445] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.860 [INFO][4445] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.863 [INFO][4445] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.866 [INFO][4445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:34.897633 containerd[1527]: 2025-07-15 04:47:34.866 [INFO][4445] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" host="localhost" Jul 15 04:47:34.897858 containerd[1527]: 2025-07-15 04:47:34.868 [INFO][4445] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4 Jul 15 04:47:34.897858 containerd[1527]: 2025-07-15 04:47:34.871 [INFO][4445] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" host="localhost" Jul 15 04:47:34.897858 containerd[1527]: 2025-07-15 04:47:34.878 [INFO][4445] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" host="localhost" Jul 15 04:47:34.897858 containerd[1527]: 2025-07-15 04:47:34.878 [INFO][4445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" host="localhost" Jul 15 04:47:34.897858 containerd[1527]: 2025-07-15 04:47:34.878 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:34.897858 containerd[1527]: 2025-07-15 04:47:34.878 [INFO][4445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" HandleID="k8s-pod-network.80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Workload="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.897983 containerd[1527]: 2025-07-15 04:47:34.882 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0", GenerateName:"calico-apiserver-5555fdf644-", Namespace:"calico-apiserver", SelfLink:"", UID:"d34fce70-fd78-417f-b0ec-e1f83ffebe90", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5555fdf644", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5555fdf644-6dvsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbe8343c83a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:34.898142 containerd[1527]: 2025-07-15 04:47:34.882 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.898142 containerd[1527]: 2025-07-15 04:47:34.882 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbe8343c83a ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.898142 containerd[1527]: 2025-07-15 04:47:34.884 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.898232 containerd[1527]: 2025-07-15 04:47:34.884 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0", GenerateName:"calico-apiserver-5555fdf644-", Namespace:"calico-apiserver", SelfLink:"", UID:"d34fce70-fd78-417f-b0ec-e1f83ffebe90", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5555fdf644", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4", Pod:"calico-apiserver-5555fdf644-6dvsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbe8343c83a", MAC:"12:5f:94:da:4b:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:34.898306 containerd[1527]: 2025-07-15 04:47:34.894 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-6dvsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--6dvsm-eth0" Jul 15 04:47:34.931567 containerd[1527]: time="2025-07-15T04:47:34.931512398Z" level=info msg="connecting to shim 80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4" address="unix:///run/containerd/s/d66106f25e092ead9a7ad615e3bb8ab0cccb06c45988ffb2ef1dd911209c77c8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:34.962521 systemd[1]: Started cri-containerd-80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4.scope - libcontainer container 80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4. 
Jul 15 04:47:34.974603 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:35.000929 containerd[1527]: time="2025-07-15T04:47:35.000889014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-6dvsm,Uid:d34fce70-fd78-417f-b0ec-e1f83ffebe90,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4\"" Jul 15 04:47:35.035919 systemd-networkd[1433]: cali4d36ef5472f: Gained IPv6LL Jul 15 04:47:35.036463 containerd[1527]: time="2025-07-15T04:47:35.036412485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:35.039448 containerd[1527]: time="2025-07-15T04:47:35.039406229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 04:47:35.040255 containerd[1527]: time="2025-07-15T04:47:35.040214879Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:35.042009 containerd[1527]: time="2025-07-15T04:47:35.041980348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:35.042779 containerd[1527]: time="2025-07-15T04:47:35.042754196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.022875152s" Jul 15 04:47:35.042959 containerd[1527]: time="2025-07-15T04:47:35.042783037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 04:47:35.043742 containerd[1527]: time="2025-07-15T04:47:35.043715655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 04:47:35.052766 containerd[1527]: time="2025-07-15T04:47:35.052710450Z" level=info msg="CreateContainer within sandbox \"713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 04:47:35.076989 containerd[1527]: time="2025-07-15T04:47:35.076878460Z" level=info msg="Container 1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:35.092342 containerd[1527]: time="2025-07-15T04:47:35.092268089Z" level=info msg="CreateContainer within sandbox \"713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44\"" Jul 15 04:47:35.094516 containerd[1527]: time="2025-07-15T04:47:35.094451464Z" level=info msg="StartContainer for \"1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44\"" Jul 15 04:47:35.096489 containerd[1527]: time="2025-07-15T04:47:35.096458788Z" level=info msg="connecting to shim 1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44" 
address="unix:///run/containerd/s/8c289b12d6542c4606d29464c7cf088036169e2639b346b90eea7958b6701321" protocol=ttrpc version=3 Jul 15 04:47:35.120554 systemd[1]: Started cri-containerd-1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44.scope - libcontainer container 1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44. Jul 15 04:47:35.175524 containerd[1527]: time="2025-07-15T04:47:35.175471981Z" level=info msg="StartContainer for \"1d7176750d2bfef7711fe3e6fd3ad90326c2380f1fd032993492910bc382ad44\" returns successfully" Jul 15 04:47:35.482520 systemd-networkd[1433]: cali642bee7b838: Gained IPv6LL Jul 15 04:47:35.756193 kubelet[2675]: E0715 04:47:35.756088 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:35.757141 containerd[1527]: time="2025-07-15T04:47:35.756553137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgvdj,Uid:41c5b645-6734-48b4-a165-074bd90a2006,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:35.757141 containerd[1527]: time="2025-07-15T04:47:35.756712906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-2ngbv,Uid:348e8462-7e3a-4709-a2d9-1e4cf20af94a,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:47:35.757503 containerd[1527]: time="2025-07-15T04:47:35.757435991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9sgfr,Uid:89c9e28e-100f-4e96-85cc-ea4bcf2583b8,Namespace:calico-system,Attempt:0,}" Jul 15 04:47:35.887234 systemd-networkd[1433]: cali2c3d7bbde4a: Link UP Jul 15 04:47:35.887629 systemd-networkd[1433]: cali2c3d7bbde4a: Gained carrier Jul 15 04:47:35.900435 containerd[1527]: 2025-07-15 04:47:35.807 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0 calico-apiserver-5555fdf644- calico-apiserver 348e8462-7e3a-4709-a2d9-1e4cf20af94a 826 0 2025-07-15 04:47:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5555fdf644 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5555fdf644-2ngbv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c3d7bbde4a [] [] }} ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-" Jul 15 04:47:35.900435 containerd[1527]: 2025-07-15 04:47:35.807 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.900435 containerd[1527]: 2025-07-15 04:47:35.833 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" HandleID="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Workload="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 
04:47:35.833 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" HandleID="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Workload="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000518350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5555fdf644-2ngbv", "timestamp":"2025-07-15 04:47:35.833671853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.833 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.833 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.833 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.849 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" host="localhost" Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.854 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.859 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.861 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.864 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:35.900705 containerd[1527]: 2025-07-15 04:47:35.864 [INFO][4586] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" host="localhost" Jul 15 04:47:35.900976 containerd[1527]: 2025-07-15 04:47:35.866 [INFO][4586] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac Jul 15 04:47:35.900976 containerd[1527]: 2025-07-15 04:47:35.871 [INFO][4586] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" host="localhost" Jul 15 04:47:35.900976 containerd[1527]: 2025-07-15 04:47:35.879 [INFO][4586] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" host="localhost" Jul 15 04:47:35.900976 containerd[1527]: 2025-07-15 04:47:35.879 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" host="localhost" Jul 15 04:47:35.900976 containerd[1527]: 2025-07-15 04:47:35.879 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:35.900976 containerd[1527]: 2025-07-15 04:47:35.879 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" HandleID="k8s-pod-network.32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Workload="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.901124 containerd[1527]: 2025-07-15 04:47:35.884 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0", GenerateName:"calico-apiserver-5555fdf644-", Namespace:"calico-apiserver", SelfLink:"", UID:"348e8462-7e3a-4709-a2d9-1e4cf20af94a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5555fdf644", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5555fdf644-2ngbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3d7bbde4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:35.901181 containerd[1527]: 2025-07-15 04:47:35.884 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.901181 containerd[1527]: 2025-07-15 04:47:35.884 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c3d7bbde4a ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.901181 containerd[1527]: 2025-07-15 04:47:35.889 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.901260 containerd[1527]: 2025-07-15 04:47:35.889 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0", GenerateName:"calico-apiserver-5555fdf644-", Namespace:"calico-apiserver", SelfLink:"", UID:"348e8462-7e3a-4709-a2d9-1e4cf20af94a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5555fdf644", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac", Pod:"calico-apiserver-5555fdf644-2ngbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3d7bbde4a", MAC:"5e:e0:34:cb:51:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:35.901327 containerd[1527]: 2025-07-15 04:47:35.897 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" Namespace="calico-apiserver" Pod="calico-apiserver-5555fdf644-2ngbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5555fdf644--2ngbv-eth0" Jul 15 04:47:35.923001 containerd[1527]: time="2025-07-15T04:47:35.922960279Z" level=info msg="connecting to shim 32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac" address="unix:///run/containerd/s/a62b205e75cfdcd3ca26764072234299c109d4a403f45c3f48be79267e6715e1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:35.960591 systemd[1]: Started cri-containerd-32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac.scope - libcontainer container 32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac. 
Jul 15 04:47:35.978256 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:35.981447 systemd-networkd[1433]: calie07fe1361cc: Link UP Jul 15 04:47:35.981629 systemd-networkd[1433]: calie07fe1361cc: Gained carrier Jul 15 04:47:35.999074 containerd[1527]: 2025-07-15 04:47:35.811 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0 goldmane-768f4c5c69- calico-system 89c9e28e-100f-4e96-85cc-ea4bcf2583b8 827 0 2025-07-15 04:47:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-9sgfr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie07fe1361cc [] [] }} ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-" Jul 15 04:47:35.999074 containerd[1527]: 2025-07-15 04:47:35.811 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:35.999074 containerd[1527]: 2025-07-15 04:47:35.840 [INFO][4592] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" HandleID="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Workload="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.840 [INFO][4592] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" HandleID="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Workload="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-9sgfr", "timestamp":"2025-07-15 04:47:35.840653803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.840 [INFO][4592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.879 [INFO][4592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.879 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.950 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" host="localhost" Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.955 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.960 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.962 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.965 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:35.999312 containerd[1527]: 2025-07-15 04:47:35.965 [INFO][4592] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" host="localhost" Jul 15 04:47:35.999551 containerd[1527]: 2025-07-15 04:47:35.967 [INFO][4592] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a Jul 15 04:47:35.999551 containerd[1527]: 2025-07-15 04:47:35.971 [INFO][4592] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" host="localhost" Jul 15 04:47:35.999551 containerd[1527]: 2025-07-15 04:47:35.976 [INFO][4592] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" host="localhost" Jul 15 04:47:35.999551 containerd[1527]: 2025-07-15 04:47:35.976 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" host="localhost" Jul 15 04:47:35.999551 containerd[1527]: 2025-07-15 04:47:35.976 [INFO][4592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:35.999551 containerd[1527]: 2025-07-15 04:47:35.976 [INFO][4592] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" HandleID="k8s-pod-network.95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Workload="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:35.999666 containerd[1527]: 2025-07-15 04:47:35.979 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"89c9e28e-100f-4e96-85cc-ea4bcf2583b8", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-9sgfr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie07fe1361cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:35.999666 containerd[1527]: 2025-07-15 04:47:35.979 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:35.999735 containerd[1527]: 2025-07-15 04:47:35.979 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie07fe1361cc ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:35.999735 containerd[1527]: 2025-07-15 04:47:35.981 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:35.999777 containerd[1527]: 2025-07-15 04:47:35.982 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"89c9e28e-100f-4e96-85cc-ea4bcf2583b8", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a", Pod:"goldmane-768f4c5c69-9sgfr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie07fe1361cc", MAC:"96:53:90:34:07:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:35.999824 containerd[1527]: 2025-07-15 04:47:35.994 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" Namespace="calico-system" Pod="goldmane-768f4c5c69-9sgfr" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9sgfr-eth0" Jul 15 04:47:36.018653 containerd[1527]: time="2025-07-15T04:47:36.017017928Z" level=info msg="connecting to shim 95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a" address="unix:///run/containerd/s/db53db4f4106861f873d115125dda230ae2f13be1f7fa35b51c91d1874a6fc18" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:36.018653 containerd[1527]: time="2025-07-15T04:47:36.018397010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5555fdf644-2ngbv,Uid:348e8462-7e3a-4709-a2d9-1e4cf20af94a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac\"" Jul 15 04:47:36.041536 systemd[1]: Started cri-containerd-95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a.scope - libcontainer container 95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a. 
Jul 15 04:47:36.055019 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:36.088753 containerd[1527]: time="2025-07-15T04:47:36.088690090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9sgfr,Uid:89c9e28e-100f-4e96-85cc-ea4bcf2583b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a\"" Jul 15 04:47:36.098825 systemd-networkd[1433]: cali4f5010e844c: Link UP Jul 15 04:47:36.099879 systemd-networkd[1433]: cali4f5010e844c: Gained carrier Jul 15 04:47:36.130130 containerd[1527]: 2025-07-15 04:47:35.825 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0 coredns-674b8bbfcf- kube-system 41c5b645-6734-48b4-a165-074bd90a2006 829 0 2025-07-15 04:47:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xgvdj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f5010e844c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-" Jul 15 04:47:36.130130 containerd[1527]: 2025-07-15 04:47:35.825 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.130130 containerd[1527]: 2025-07-15 04:47:35.855 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" HandleID="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Workload="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:35.855 [INFO][4600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" HandleID="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Workload="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xgvdj", "timestamp":"2025-07-15 04:47:35.855421034 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:35.855 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:35.976 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:35.976 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:36.055 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" host="localhost" Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:36.063 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:36.069 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:36.071 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:36.074 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:36.130326 containerd[1527]: 2025-07-15 04:47:36.074 [INFO][4600] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" host="localhost" Jul 15 04:47:36.130635 containerd[1527]: 2025-07-15 04:47:36.075 [INFO][4600] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711 Jul 15 04:47:36.130635 containerd[1527]: 2025-07-15 04:47:36.080 [INFO][4600] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" host="localhost" Jul 15 04:47:36.130635 containerd[1527]: 2025-07-15 04:47:36.093 [INFO][4600] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" host="localhost" Jul 15 04:47:36.130635 containerd[1527]: 2025-07-15 04:47:36.093 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" host="localhost" Jul 15 04:47:36.130635 containerd[1527]: 2025-07-15 04:47:36.093 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:36.130635 containerd[1527]: 2025-07-15 04:47:36.093 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" HandleID="k8s-pod-network.6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Workload="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.130777 containerd[1527]: 2025-07-15 04:47:36.096 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"41c5b645-6734-48b4-a165-074bd90a2006", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xgvdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f5010e844c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:36.130853 containerd[1527]: 2025-07-15 04:47:36.096 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.130853 containerd[1527]: 2025-07-15 04:47:36.096 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f5010e844c ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.130853 containerd[1527]: 2025-07-15 04:47:36.098 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.130927 
containerd[1527]: 2025-07-15 04:47:36.100 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"41c5b645-6734-48b4-a165-074bd90a2006", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711", Pod:"coredns-674b8bbfcf-xgvdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f5010e844c", MAC:"62:2e:24:c0:12:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:36.130927 containerd[1527]: 2025-07-15 04:47:36.127 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgvdj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgvdj-eth0" Jul 15 04:47:36.155689 containerd[1527]: time="2025-07-15T04:47:36.155642330Z" level=info msg="connecting to shim 6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711" address="unix:///run/containerd/s/533561a73f54d1008bd2b0f9680fe407448db2e7371b72d9020f7578307d834b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:36.188546 systemd[1]: Started cri-containerd-6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711.scope - libcontainer container 6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711. 
Jul 15 04:47:36.199822 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:36.220755 containerd[1527]: time="2025-07-15T04:47:36.220715897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgvdj,Uid:41c5b645-6734-48b4-a165-074bd90a2006,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711\"" Jul 15 04:47:36.221531 kubelet[2675]: E0715 04:47:36.221501 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:36.225503 containerd[1527]: time="2025-07-15T04:47:36.225474942Z" level=info msg="CreateContainer within sandbox \"6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:47:36.235812 containerd[1527]: time="2025-07-15T04:47:36.235755636Z" level=info msg="Container 11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:36.242439 containerd[1527]: time="2025-07-15T04:47:36.242400553Z" level=info msg="CreateContainer within sandbox \"6f4ba3eea0678d3a746b680eba5109148b2112ef1e5829f047c4be937e4d7711\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e\"" Jul 15 04:47:36.242924 containerd[1527]: time="2025-07-15T04:47:36.242897863Z" level=info msg="StartContainer for \"11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e\"" Jul 15 04:47:36.244353 containerd[1527]: time="2025-07-15T04:47:36.244259544Z" level=info msg="connecting to shim 11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e" address="unix:///run/containerd/s/533561a73f54d1008bd2b0f9680fe407448db2e7371b72d9020f7578307d834b" protocol=ttrpc version=3 Jul 15 04:47:36.267716 systemd[1]: Started cri-containerd-11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e.scope - libcontainer container 11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e. 
Jul 15 04:47:36.295259 containerd[1527]: time="2025-07-15T04:47:36.295111102Z" level=info msg="StartContainer for \"11400dbfc2c2e7cfc24737af6a0f5dc746dc3d09a5571635ffbc1a114422c74e\" returns successfully" Jul 15 04:47:36.698508 systemd-networkd[1433]: calibbe8343c83a: Gained IPv6LL Jul 15 04:47:36.926028 kubelet[2675]: E0715 04:47:36.925995 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:36.941716 kubelet[2675]: I0715 04:47:36.941652 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xgvdj" podStartSLOduration=34.941637728 podStartE2EDuration="34.941637728s" podCreationTimestamp="2025-07-15 04:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:36.939474719 +0000 UTC m=+42.275028597" watchObservedRunningTime="2025-07-15 04:47:36.941637728 +0000 UTC m=+42.277191646" Jul 15 04:47:37.062557 kubelet[2675]: I0715 04:47:37.062462 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:47:37.064776 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:39020.service - OpenSSH per-connection server daemon (10.0.0.1:39020). Jul 15 04:47:37.134707 sshd[4830]: Accepted publickey for core from 10.0.0.1 port 39020 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c Jul 15 04:47:37.137896 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:37.146299 systemd-logind[1508]: New session 8 of user core. Jul 15 04:47:37.153552 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 15 04:47:37.210864 systemd-networkd[1433]: calie07fe1361cc: Gained IPv6LL Jul 15 04:47:37.214522 containerd[1527]: time="2025-07-15T04:47:37.214461110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2\" id:\"207a94ebdf4bd72d8a435f5368282fb96e1ab9f732da7a3449d2fe794b634b61\" pid:4847 exited_at:{seconds:1752554857 nanos:214098209}" Jul 15 04:47:37.340596 containerd[1527]: time="2025-07-15T04:47:37.340558688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2\" id:\"5a4d29fef6cd446fe07874b45ddd93d9c49d6d87bbf4370e81448896c3828f93\" pid:4884 exited_at:{seconds:1752554857 nanos:339986775}" Jul 15 04:47:37.456481 containerd[1527]: time="2025-07-15T04:47:37.456333589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:37.456806 containerd[1527]: time="2025-07-15T04:47:37.456728852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 04:47:37.457919 containerd[1527]: time="2025-07-15T04:47:37.457879478Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:37.459348 sshd[4860]: Connection closed by 10.0.0.1 port 39020 Jul 15 04:47:37.459222 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:37.461939 containerd[1527]: time="2025-07-15T04:47:37.461570692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:37.464457 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:39020.service: Deactivated successfully. Jul 15 04:47:37.467204 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 04:47:37.468177 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit. Jul 15 04:47:37.470467 containerd[1527]: time="2025-07-15T04:47:37.470309798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.426446894s" Jul 15 04:47:37.470467 containerd[1527]: time="2025-07-15T04:47:37.470344760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 04:47:37.471411 systemd-logind[1508]: Removed session 8. 
Jul 15 04:47:37.472760 containerd[1527]: time="2025-07-15T04:47:37.472706416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:47:37.482154 containerd[1527]: time="2025-07-15T04:47:37.481727699Z" level=info msg="CreateContainer within sandbox \"65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 04:47:37.487230 containerd[1527]: time="2025-07-15T04:47:37.487197575Z" level=info msg="Container f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:37.494087 containerd[1527]: time="2025-07-15T04:47:37.494049132Z" level=info msg="CreateContainer within sandbox \"65d61d7c83f227e52f610890d7e6382d07cfed7f8a674ce1b191ff1d8a8f54ff\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125\"" Jul 15 04:47:37.495062 containerd[1527]: time="2025-07-15T04:47:37.495023868Z" level=info msg="StartContainer for \"f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125\"" Jul 15 04:47:37.496433 containerd[1527]: time="2025-07-15T04:47:37.496405748Z" level=info msg="connecting to shim f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125" address="unix:///run/containerd/s/570d350f3f2c154db3e52acd4e95042e7c80747bb93ed5509700a91f576a18e8" protocol=ttrpc version=3 Jul 15 04:47:37.517532 systemd[1]: Started cri-containerd-f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125.scope - libcontainer container f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125. Jul 15 04:47:37.552072 containerd[1527]: time="2025-07-15T04:47:37.551942202Z" level=info msg="StartContainer for \"f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125\" returns successfully" Jul 15 04:47:37.658510 systemd-networkd[1433]: cali2c3d7bbde4a: Gained IPv6LL Jul 15 04:47:37.756389 kubelet[2675]: E0715 04:47:37.756306 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:37.757028 containerd[1527]: time="2025-07-15T04:47:37.756946747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gg8bb,Uid:20642bd1-05fb-429a-a40a-fda9cd5820ba,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:37.787569 systemd-networkd[1433]: cali4f5010e844c: Gained IPv6LL Jul 15 04:47:37.861316 systemd-networkd[1433]: cali675e8030333: Link UP Jul 15 04:47:37.861774 systemd-networkd[1433]: cali675e8030333: Gained carrier Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.794 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0 coredns-674b8bbfcf- kube-system 20642bd1-05fb-429a-a40a-fda9cd5820ba 824 0 2025-07-15 04:47:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gg8bb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali675e8030333 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.794 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.822 [INFO][4959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" HandleID="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Workload="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.822 [INFO][4959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" HandleID="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Workload="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137980), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gg8bb", "timestamp":"2025-07-15 04:47:37.82229533 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.822 [INFO][4959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.822 [INFO][4959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.822 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.832 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.837 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.841 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.843 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.845 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.845 [INFO][4959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.847 [INFO][4959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96 Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.850 [INFO][4959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.857 [INFO][4959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.857 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" host="localhost" Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.857 [INFO][4959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:47:37.875224 containerd[1527]: 2025-07-15 04:47:37.857 [INFO][4959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" HandleID="k8s-pod-network.d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Workload="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.875940 containerd[1527]: 2025-07-15 04:47:37.859 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"20642bd1-05fb-429a-a40a-fda9cd5820ba", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gg8bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali675e8030333", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:37.875940 containerd[1527]: 2025-07-15 04:47:37.859 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.875940 containerd[1527]: 2025-07-15 04:47:37.859 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali675e8030333 ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.875940 containerd[1527]: 2025-07-15 04:47:37.862 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.875940 
containerd[1527]: 2025-07-15 04:47:37.863 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"20642bd1-05fb-429a-a40a-fda9cd5820ba", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96", Pod:"coredns-674b8bbfcf-gg8bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali675e8030333", MAC:"3e:fe:01:2a:52:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:47:37.875940 containerd[1527]: 2025-07-15 04:47:37.872 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" Namespace="kube-system" Pod="coredns-674b8bbfcf-gg8bb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gg8bb-eth0" Jul 15 04:47:37.919183 containerd[1527]: time="2025-07-15T04:47:37.918806035Z" level=info msg="connecting to shim d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96" address="unix:///run/containerd/s/8d15bd7ef6069978d48da565030681702bb8ea1c7b1d93a27d383fefa3bdc167" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:37.938891 kubelet[2675]: E0715 04:47:37.938839 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:37.941155 systemd[1]: Started cri-containerd-d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96.scope - libcontainer container d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96. 
Jul 15 04:47:37.953945 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:37.987184 containerd[1527]: time="2025-07-15T04:47:37.987128790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gg8bb,Uid:20642bd1-05fb-429a-a40a-fda9cd5820ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96\"" Jul 15 04:47:37.988085 kubelet[2675]: E0715 04:47:37.987980 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:37.993061 containerd[1527]: time="2025-07-15T04:47:37.993024171Z" level=info msg="CreateContainer within sandbox \"d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:47:38.009753 containerd[1527]: time="2025-07-15T04:47:38.009644758Z" level=info msg="Container 5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:38.015029 containerd[1527]: time="2025-07-15T04:47:38.014990018Z" level=info msg="CreateContainer within sandbox \"d875f7aa02653780516bbe43a267c9df8ef6cbd52ff6c6c9e7ef740542556a96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987\"" Jul 15 04:47:38.015688 containerd[1527]: time="2025-07-15T04:47:38.015459204Z" level=info msg="StartContainer for \"5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987\"" Jul 15 04:47:38.016265 containerd[1527]: time="2025-07-15T04:47:38.016242368Z" level=info msg="connecting to shim 5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987" address="unix:///run/containerd/s/8d15bd7ef6069978d48da565030681702bb8ea1c7b1d93a27d383fefa3bdc167" protocol=ttrpc version=3 Jul 15 04:47:38.039541 systemd[1]: Started cri-containerd-5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987.scope - libcontainer container 5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987. 
Jul 15 04:47:38.072560 containerd[1527]: time="2025-07-15T04:47:38.072521643Z" level=info msg="StartContainer for \"5b4fe387e4a13f1ececf0759a3ab9501d61869b53b6df911e622f22b8b9d8987\" returns successfully" Jul 15 04:47:38.943216 kubelet[2675]: E0715 04:47:38.942822 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:38.943216 kubelet[2675]: E0715 04:47:38.942872 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:38.956850 kubelet[2675]: I0715 04:47:38.956775 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bf87f4ddb-bbvjj" podStartSLOduration=21.600380262 podStartE2EDuration="24.956756501s" podCreationTimestamp="2025-07-15 04:47:14 +0000 UTC" firstStartedPulling="2025-07-15 04:47:34.115285917 +0000 UTC m=+39.450839835" lastFinishedPulling="2025-07-15 04:47:37.471662156 +0000 UTC m=+42.807216074" observedRunningTime="2025-07-15 04:47:37.951516849 +0000 UTC m=+43.287070807" watchObservedRunningTime="2025-07-15 04:47:38.956756501 +0000 UTC m=+44.292310419" Jul 15 04:47:38.971417 kubelet[2675]: I0715 04:47:38.971323 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gg8bb" podStartSLOduration=36.971288516 podStartE2EDuration="36.971288516s" podCreationTimestamp="2025-07-15 04:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:38.957418858 +0000 UTC m=+44.292972736" watchObservedRunningTime="2025-07-15 04:47:38.971288516 +0000 UTC m=+44.306842434" Jul 15 04:47:39.004537 containerd[1527]: time="2025-07-15T04:47:39.000247540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125\" id:\"946af0ad3fa151c72db7f5949e7882acc66cf06fd3a4ee04ef5512d8e7671a56\" pid:5074 exited_at:{seconds:1752554858 nanos:997003198}" Jul 15 04:47:39.194542 systemd-networkd[1433]: cali675e8030333: Gained IPv6LL Jul 15 04:47:39.525084 containerd[1527]: time="2025-07-15T04:47:39.524967603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:39.525443 containerd[1527]: time="2025-07-15T04:47:39.525411747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 04:47:39.526433 containerd[1527]: time="2025-07-15T04:47:39.526397240Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:39.528390 containerd[1527]: time="2025-07-15T04:47:39.528328865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:39.528926 containerd[1527]: time="2025-07-15T04:47:39.528888056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", 
repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.056152077s" Jul 15 04:47:39.528926 containerd[1527]: time="2025-07-15T04:47:39.528920217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:47:39.530349 containerd[1527]: time="2025-07-15T04:47:39.530315493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 04:47:39.532894 containerd[1527]: time="2025-07-15T04:47:39.532858191Z" level=info msg="CreateContainer within sandbox \"80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:47:39.540331 containerd[1527]: time="2025-07-15T04:47:39.539530874Z" level=info msg="Container 5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:39.549422 containerd[1527]: time="2025-07-15T04:47:39.549324566Z" level=info msg="CreateContainer within sandbox \"80264b626144f32f2291760def57f1f32a464b4e2790b2396ea2876ab4f288b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a\"" Jul 15 04:47:39.549948 containerd[1527]: time="2025-07-15T04:47:39.549908117Z" level=info msg="StartContainer for \"5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a\"" Jul 15 04:47:39.550986 containerd[1527]: time="2025-07-15T04:47:39.550959014Z" level=info msg="connecting to shim 5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a" address="unix:///run/containerd/s/d66106f25e092ead9a7ad615e3bb8ab0cccb06c45988ffb2ef1dd911209c77c8" protocol=ttrpc version=3 Jul 15 04:47:39.572535 systemd[1]: Started cri-containerd-5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a.scope - libcontainer container 5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a. 
Jul 15 04:47:39.616390 containerd[1527]: time="2025-07-15T04:47:39.616334205Z" level=info msg="StartContainer for \"5623b1439b65286bf70ae5e944347a0b3252f2f2f3e2959c7faaffb525089a3a\" returns successfully" Jul 15 04:47:39.947869 kubelet[2675]: E0715 04:47:39.947830 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:39.961436 kubelet[2675]: I0715 04:47:39.961174 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5555fdf644-6dvsm" podStartSLOduration=25.43347344 podStartE2EDuration="29.961159175s" podCreationTimestamp="2025-07-15 04:47:10 +0000 UTC" firstStartedPulling="2025-07-15 04:47:35.002219856 +0000 UTC m=+40.337773774" lastFinishedPulling="2025-07-15 04:47:39.529905591 +0000 UTC m=+44.865459509" observedRunningTime="2025-07-15 04:47:39.958326141 +0000 UTC m=+45.293880059" watchObservedRunningTime="2025-07-15 04:47:39.961159175 +0000 UTC m=+45.296713093" Jul 15 04:47:40.950529 kubelet[2675]: I0715 04:47:40.950491 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:47:40.951271 kubelet[2675]: E0715 04:47:40.951252 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:47:41.019149 containerd[1527]: time="2025-07-15T04:47:41.019088562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:41.019939 containerd[1527]: time="2025-07-15T04:47:41.019894083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 04:47:41.020940 containerd[1527]: time="2025-07-15T04:47:41.020912135Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:41.023671 containerd[1527]: time="2025-07-15T04:47:41.023630673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:41.024195 containerd[1527]: time="2025-07-15T04:47:41.024150580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.493795124s" Jul 15 04:47:41.024195 containerd[1527]: time="2025-07-15T04:47:41.024193062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 04:47:41.025349 containerd[1527]: time="2025-07-15T04:47:41.025151151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:47:41.029706 containerd[1527]: time="2025-07-15T04:47:41.029669021Z" level=info msg="CreateContainer within sandbox \"713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 15 04:47:41.036555 containerd[1527]: time="2025-07-15T04:47:41.036491929Z" level=info msg="Container a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:47:41.044943 containerd[1527]: time="2025-07-15T04:47:41.044890717Z" level=info msg="CreateContainer within sandbox \"713170ba9fbf2cb08a7c800f8bc0d2e9aa9f0e83a999b2c8d9b27d9bc3407888\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc\""
Jul 15 04:47:41.045595 containerd[1527]: time="2025-07-15T04:47:41.045569392Z" level=info msg="StartContainer for \"a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc\""
Jul 15 04:47:41.047334 containerd[1527]: time="2025-07-15T04:47:41.047292239Z" level=info msg="connecting to shim a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc" address="unix:///run/containerd/s/8c289b12d6542c4606d29464c7cf088036169e2639b346b90eea7958b6701321" protocol=ttrpc version=3
Jul 15 04:47:41.072583 systemd[1]: Started cri-containerd-a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc.scope - libcontainer container a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc.
Jul 15 04:47:41.121502 containerd[1527]: time="2025-07-15T04:47:41.121453940Z" level=info msg="StartContainer for \"a19b6c3d601d517e04d0276c5e8e9d144801d247124b63f9e088d87e2b6c39cc\" returns successfully"
Jul 15 04:47:41.285711 containerd[1527]: time="2025-07-15T04:47:41.285580026Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:47:41.286687 containerd[1527]: time="2025-07-15T04:47:41.286659521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 15 04:47:41.288956 containerd[1527]: time="2025-07-15T04:47:41.288923596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 263.738124ms"
Jul 15 04:47:41.288956 containerd[1527]: time="2025-07-15T04:47:41.288958558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 15 04:47:41.291723 containerd[1527]: time="2025-07-15T04:47:41.291497088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 15 04:47:41.294591 containerd[1527]: time="2025-07-15T04:47:41.294557764Z" level=info msg="CreateContainer within sandbox \"32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 15 04:47:41.314330 containerd[1527]: time="2025-07-15T04:47:41.313594214Z" level=info msg="Container e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:47:41.324560 containerd[1527]: time="2025-07-15T04:47:41.324487449Z" level=info msg="CreateContainer within sandbox \"32019ddce349ebc02d6cf04ceb03a30788aa96922c3e52dc0f0408e3e70743ac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684\""
Jul 15 04:47:41.325404 containerd[1527]: time="2025-07-15T04:47:41.325121602Z" level=info msg="StartContainer for \"e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684\""
Jul 15 04:47:41.327196 containerd[1527]: time="2025-07-15T04:47:41.327161706Z" level=info msg="connecting to shim e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684" address="unix:///run/containerd/s/a62b205e75cfdcd3ca26764072234299c109d4a403f45c3f48be79267e6715e1" protocol=ttrpc version=3
Jul 15 04:47:41.354747 systemd[1]: Started cri-containerd-e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684.scope - libcontainer container e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684.
Jul 15 04:47:41.409014 containerd[1527]: time="2025-07-15T04:47:41.408956475Z" level=info msg="StartContainer for \"e9c72afb54a85758d4353a61241005f589989d114381607c83ce2ae495052684\" returns successfully"
Jul 15 04:47:41.828277 kubelet[2675]: I0715 04:47:41.828235 2675 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 15 04:47:41.828277 kubelet[2675]: I0715 04:47:41.828284 2675 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 15 04:47:41.980638 kubelet[2675]: I0715 04:47:41.980136 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5555fdf644-2ngbv" podStartSLOduration=26.709867156 podStartE2EDuration="31.98011743s" podCreationTimestamp="2025-07-15 04:47:10 +0000 UTC" firstStartedPulling="2025-07-15 04:47:36.020434052 +0000 UTC m=+41.355987970" lastFinishedPulling="2025-07-15 04:47:41.290684246 +0000 UTC m=+46.626238244" observedRunningTime="2025-07-15 04:47:41.979217584 +0000 UTC m=+47.314771582" watchObservedRunningTime="2025-07-15 04:47:41.98011743 +0000 UTC m=+47.315671388"
Jul 15 04:47:41.999591 kubelet[2675]: I0715 04:47:41.999471 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nvc78" podStartSLOduration=21.99108911 podStartE2EDuration="28.999452296s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="2025-07-15 04:47:34.016652598 +0000 UTC m=+39.352206516" lastFinishedPulling="2025-07-15 04:47:41.025015784 +0000 UTC m=+46.360569702" observedRunningTime="2025-07-15 04:47:41.997838373 +0000 UTC m=+47.333392291" watchObservedRunningTime="2025-07-15 04:47:41.999452296 +0000 UTC m=+47.335006214"
Jul 15 04:47:42.478994 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:53180.service - OpenSSH per-connection server daemon (10.0.0.1:53180).
Jul 15 04:47:42.593535 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 53180 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:42.598036 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:42.604424 systemd-logind[1508]: New session 9 of user core.
Jul 15 04:47:42.618687 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 04:47:42.853038 sshd[5212]: Connection closed by 10.0.0.1 port 53180
Jul 15 04:47:42.853845 sshd-session[5209]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:42.858784 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:53180.service: Deactivated successfully.
Jul 15 04:47:42.861582 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 04:47:42.866555 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit.
Jul 15 04:47:42.867693 systemd-logind[1508]: Removed session 9.
Jul 15 04:47:42.969198 kubelet[2675]: I0715 04:47:42.969160 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:47:43.464168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952125839.mount: Deactivated successfully.
Jul 15 04:47:43.964375 containerd[1527]: time="2025-07-15T04:47:43.963373277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:47:43.964375 containerd[1527]: time="2025-07-15T04:47:43.963867020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 15 04:47:43.965513 containerd[1527]: time="2025-07-15T04:47:43.965473817Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:47:43.968440 containerd[1527]: time="2025-07-15T04:47:43.968387037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:47:43.969300 containerd[1527]: time="2025-07-15T04:47:43.969270919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.67774275s"
Jul 15 04:47:43.969455 containerd[1527]: time="2025-07-15T04:47:43.969438847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 15 04:47:43.975401 containerd[1527]: time="2025-07-15T04:47:43.974660057Z" level=info msg="CreateContainer within sandbox \"95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 15 04:47:44.026741 containerd[1527]: time="2025-07-15T04:47:44.026679308Z" level=info msg="Container e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:47:44.034284 containerd[1527]: time="2025-07-15T04:47:44.034176735Z" level=info msg="CreateContainer within sandbox \"95bf3d9452d397ec3063e750578d07c744865d63be9286a64a8cd057fe8fbe0a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04\""
Jul 15 04:47:44.034852 containerd[1527]: time="2025-07-15T04:47:44.034811644Z" level=info msg="StartContainer for \"e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04\""
Jul 15 04:47:44.036171 containerd[1527]: time="2025-07-15T04:47:44.036122745Z" level=info msg="connecting to shim e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04" address="unix:///run/containerd/s/db53db4f4106861f873d115125dda230ae2f13be1f7fa35b51c91d1874a6fc18" protocol=ttrpc version=3
Jul 15 04:47:44.060595 systemd[1]: Started cri-containerd-e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04.scope - libcontainer container e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04.
Jul 15 04:47:44.102634 containerd[1527]: time="2025-07-15T04:47:44.102568905Z" level=info msg="StartContainer for \"e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04\" returns successfully"
Jul 15 04:47:45.001008 kubelet[2675]: I0715 04:47:45.000927 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-9sgfr" podStartSLOduration=24.121328498 podStartE2EDuration="32.000910977s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="2025-07-15 04:47:36.090591163 +0000 UTC m=+41.426145081" lastFinishedPulling="2025-07-15 04:47:43.970173642 +0000 UTC m=+49.305727560" observedRunningTime="2025-07-15 04:47:45.000645605 +0000 UTC m=+50.336199523" watchObservedRunningTime="2025-07-15 04:47:45.000910977 +0000 UTC m=+50.336464895"
Jul 15 04:47:45.136269 containerd[1527]: time="2025-07-15T04:47:45.136222253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04\" id:\"544ed409650ba8e9531e7de9b602c35ddec9c0ec158cfcdf55c3a559e6cb2c47\" pid:5290 exit_status:1 exited_at:{seconds:1752554865 nanos:135866997}"
Jul 15 04:47:46.082993 containerd[1527]: time="2025-07-15T04:47:46.082783476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04\" id:\"8c31b054d900974f5a06d74ed81cb2392666e164b9855c0574ee37ce2c0afd0e\" pid:5320 exited_at:{seconds:1752554866 nanos:82212771}"
Jul 15 04:47:47.868645 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:53188.service - OpenSSH per-connection server daemon (10.0.0.1:53188).
Jul 15 04:47:47.928816 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 53188 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:47.930309 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:47.935191 systemd-logind[1508]: New session 10 of user core.
Jul 15 04:47:47.945526 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 04:47:48.110374 sshd[5340]: Connection closed by 10.0.0.1 port 53188
Jul 15 04:47:48.111272 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:48.121097 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:53188.service: Deactivated successfully.
Jul 15 04:47:48.123393 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 04:47:48.124134 systemd-logind[1508]: Session 10 logged out. Waiting for processes to exit.
Jul 15 04:47:48.126938 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:53192.service - OpenSSH per-connection server daemon (10.0.0.1:53192).
Jul 15 04:47:48.128204 systemd-logind[1508]: Removed session 10.
Jul 15 04:47:48.176675 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 53192 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:48.178033 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:48.184062 systemd-logind[1508]: New session 11 of user core.
Jul 15 04:47:48.199582 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 04:47:48.393096 sshd[5357]: Connection closed by 10.0.0.1 port 53192
Jul 15 04:47:48.393518 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:48.410560 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:53192.service: Deactivated successfully.
Jul 15 04:47:48.414600 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 04:47:48.416024 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit.
Jul 15 04:47:48.422299 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:53196.service - OpenSSH per-connection server daemon (10.0.0.1:53196).
Jul 15 04:47:48.423345 systemd-logind[1508]: Removed session 11.
Jul 15 04:47:48.473604 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 53196 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:48.474898 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:48.480706 systemd-logind[1508]: New session 12 of user core.
Jul 15 04:47:48.488509 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 04:47:48.674344 sshd[5374]: Connection closed by 10.0.0.1 port 53196
Jul 15 04:47:48.674603 sshd-session[5369]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:48.678787 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:53196.service: Deactivated successfully.
Jul 15 04:47:48.680688 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 04:47:48.681319 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit.
Jul 15 04:47:48.682220 systemd-logind[1508]: Removed session 12.
Jul 15 04:47:50.070940 kubelet[2675]: I0715 04:47:50.070891 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:47:53.687021 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:41722.service - OpenSSH per-connection server daemon (10.0.0.1:41722).
Jul 15 04:47:53.755448 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 41722 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:53.756760 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:53.760528 systemd-logind[1508]: New session 13 of user core.
Jul 15 04:47:53.766528 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 04:47:53.944309 sshd[5402]: Connection closed by 10.0.0.1 port 41722
Jul 15 04:47:53.944709 sshd-session[5399]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:53.955546 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:41722.service: Deactivated successfully.
Jul 15 04:47:53.957372 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 04:47:53.958865 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit.
Jul 15 04:47:53.961432 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:41730.service - OpenSSH per-connection server daemon (10.0.0.1:41730).
Jul 15 04:47:53.963846 systemd-logind[1508]: Removed session 13.
Jul 15 04:47:54.017420 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 41730 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:54.018373 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:54.022416 systemd-logind[1508]: New session 14 of user core.
Jul 15 04:47:54.038497 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 04:47:54.225081 sshd[5419]: Connection closed by 10.0.0.1 port 41730
Jul 15 04:47:54.225336 sshd-session[5416]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:54.241416 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:41730.service: Deactivated successfully.
Jul 15 04:47:54.243071 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 04:47:54.244564 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit.
Jul 15 04:47:54.246517 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:41734.service - OpenSSH per-connection server daemon (10.0.0.1:41734).
Jul 15 04:47:54.247510 systemd-logind[1508]: Removed session 14.
Jul 15 04:47:54.305160 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 41734 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:54.306456 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:54.310269 systemd-logind[1508]: New session 15 of user core.
Jul 15 04:47:54.317587 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 04:47:55.147430 sshd[5434]: Connection closed by 10.0.0.1 port 41734
Jul 15 04:47:55.148335 sshd-session[5431]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:55.158753 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:41734.service: Deactivated successfully.
Jul 15 04:47:55.161322 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 04:47:55.162554 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit.
Jul 15 04:47:55.170886 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:41750.service - OpenSSH per-connection server daemon (10.0.0.1:41750).
Jul 15 04:47:55.174129 systemd-logind[1508]: Removed session 15.
Jul 15 04:47:55.222792 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 41750 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:55.223993 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:55.227842 systemd-logind[1508]: New session 16 of user core.
Jul 15 04:47:55.241492 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 04:47:55.524311 sshd[5458]: Connection closed by 10.0.0.1 port 41750
Jul 15 04:47:55.525454 sshd-session[5455]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:55.540332 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:41750.service: Deactivated successfully.
Jul 15 04:47:55.543272 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 04:47:55.544785 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit.
Jul 15 04:47:55.548455 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:41764.service - OpenSSH per-connection server daemon (10.0.0.1:41764).
Jul 15 04:47:55.549429 systemd-logind[1508]: Removed session 16.
Jul 15 04:47:55.606234 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 41764 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:47:55.607552 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:47:55.615236 systemd-logind[1508]: New session 17 of user core.
Jul 15 04:47:55.625582 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 04:47:55.694106 containerd[1527]: time="2025-07-15T04:47:55.694057003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125\" id:\"33f3c641c764ca30024c5caabd5cc58d22b647b22830093941a03289c4238f2a\" pid:5485 exited_at:{seconds:1752554875 nanos:693816435}"
Jul 15 04:47:55.780967 sshd[5473]: Connection closed by 10.0.0.1 port 41764
Jul 15 04:47:55.781675 sshd-session[5470]: pam_unix(sshd:session): session closed for user core
Jul 15 04:47:55.785352 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:41764.service: Deactivated successfully.
Jul 15 04:47:55.787081 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 04:47:55.787767 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit.
Jul 15 04:47:55.788775 systemd-logind[1508]: Removed session 17.
Jul 15 04:47:56.434384 containerd[1527]: time="2025-07-15T04:47:56.434243192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0e03c14820aaba1b91c6dbf034ff4dad67d0c82ac2db7e8204009453ae21f04\" id:\"253b1fe0ec06c4571c5a25ec64ae67ce4a2c707f7bda5a37f9afc4ab16753b81\" pid:5518 exited_at:{seconds:1752554876 nanos:433953502}"
Jul 15 04:48:00.793385 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:41766.service - OpenSSH per-connection server daemon (10.0.0.1:41766).
Jul 15 04:48:00.837961 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 41766 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:48:00.838862 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:48:00.842716 systemd-logind[1508]: New session 18 of user core.
Jul 15 04:48:00.854613 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 04:48:01.005567 sshd[5538]: Connection closed by 10.0.0.1 port 41766
Jul 15 04:48:01.006093 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Jul 15 04:48:01.009751 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:41766.service: Deactivated successfully.
Jul 15 04:48:01.011544 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 04:48:01.012338 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit.
Jul 15 04:48:01.013455 systemd-logind[1508]: Removed session 18.
Jul 15 04:48:06.017846 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:32794.service - OpenSSH per-connection server daemon (10.0.0.1:32794).
Jul 15 04:48:06.061899 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 32794 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:48:06.063478 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:48:06.070283 systemd-logind[1508]: New session 19 of user core.
Jul 15 04:48:06.079540 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 04:48:06.235729 sshd[5559]: Connection closed by 10.0.0.1 port 32794
Jul 15 04:48:06.236235 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Jul 15 04:48:06.241752 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:32794.service: Deactivated successfully.
Jul 15 04:48:06.243572 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 04:48:06.244462 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit.
Jul 15 04:48:06.245953 systemd-logind[1508]: Removed session 19.
Jul 15 04:48:07.316301 containerd[1527]: time="2025-07-15T04:48:07.316250015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f193ff3088296433b6d4d1b010f1b243be74f5c38818ffdc9c8b4c3af49dafb2\" id:\"68553301ef8c76a437cabf6bd4918016906cb7eec83197b484c9f1bc7889ec30\" pid:5585 exited_at:{seconds:1752554887 nanos:315924888}"
Jul 15 04:48:08.756026 kubelet[2675]: E0715 04:48:08.755935 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 04:48:08.974462 containerd[1527]: time="2025-07-15T04:48:08.974421962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7a877fac417eac7f53d5f31e493bb7201fae9dc750b810a4b6935f047fa9125\" id:\"2a4bc095db228e2f2ec289919d7ea495bf916e182b1f83a3e31601ca4e0e9096\" pid:5615 exited_at:{seconds:1752554888 nanos:974184357}"
Jul 15 04:48:11.250893 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:32808.service - OpenSSH per-connection server daemon (10.0.0.1:32808).
Jul 15 04:48:11.318966 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 32808 ssh2: RSA SHA256:sv36Sv5cF+dK4scc2r2cUvpDU+BCYvXiqSSRxSnX4+c
Jul 15 04:48:11.320245 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:48:11.324241 systemd-logind[1508]: New session 20 of user core.
Jul 15 04:48:11.342768 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 04:48:11.505925 sshd[5629]: Connection closed by 10.0.0.1 port 32808
Jul 15 04:48:11.506601 sshd-session[5626]: pam_unix(sshd:session): session closed for user core
Jul 15 04:48:11.512410 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:32808.service: Deactivated successfully.
Jul 15 04:48:11.517182 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 04:48:11.518952 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit.
Jul 15 04:48:11.520850 systemd-logind[1508]: Removed session 20.