Mar 20 21:36:28.905836 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 20 21:36:28.905857 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 19:37:53 -00 2025
Mar 20 21:36:28.905866 kernel: KASLR enabled
Mar 20 21:36:28.905872 kernel: efi: EFI v2.7 by EDK II
Mar 20 21:36:28.905878 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40498
Mar 20 21:36:28.905883 kernel: random: crng init done
Mar 20 21:36:28.905890 kernel: secureboot: Secure boot disabled
Mar 20 21:36:28.905895 kernel: ACPI: Early table checksum verification disabled
Mar 20 21:36:28.905901 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 20 21:36:28.905908 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 21:36:28.905914 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905919 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905925 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905931 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905938 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905946 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905952 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905958 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905964 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 21:36:28.905970 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 20 21:36:28.905976 kernel: NUMA: Failed to initialise from firmware
Mar 20 21:36:28.905982 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 21:36:28.905988 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 20 21:36:28.905994 kernel: Zone ranges:
Mar 20 21:36:28.906000 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 21:36:28.906007 kernel: DMA32 empty
Mar 20 21:36:28.906012 kernel: Normal empty
Mar 20 21:36:28.906018 kernel: Movable zone start for each node
Mar 20 21:36:28.906024 kernel: Early memory node ranges
Mar 20 21:36:28.906030 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 20 21:36:28.906036 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 20 21:36:28.906042 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 20 21:36:28.906048 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 20 21:36:28.906054 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 20 21:36:28.906060 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 20 21:36:28.906065 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 20 21:36:28.906072 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 20 21:36:28.906079 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 20 21:36:28.906086 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 21:36:28.906092 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 20 21:36:28.906100 kernel: psci: probing for conduit method from ACPI.
Mar 20 21:36:28.906106 kernel: psci: PSCIv1.1 detected in firmware.
Mar 20 21:36:28.906126 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 20 21:36:28.906134 kernel: psci: Trusted OS migration not required
Mar 20 21:36:28.906141 kernel: psci: SMC Calling Convention v1.1
Mar 20 21:36:28.906148 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 20 21:36:28.906154 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 20 21:36:28.906161 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 20 21:36:28.906167 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 20 21:36:28.906174 kernel: Detected PIPT I-cache on CPU0
Mar 20 21:36:28.906180 kernel: CPU features: detected: GIC system register CPU interface
Mar 20 21:36:28.906187 kernel: CPU features: detected: Hardware dirty bit management
Mar 20 21:36:28.906193 kernel: CPU features: detected: Spectre-v4
Mar 20 21:36:28.906200 kernel: CPU features: detected: Spectre-BHB
Mar 20 21:36:28.906207 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 20 21:36:28.906213 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 20 21:36:28.906220 kernel: CPU features: detected: ARM erratum 1418040
Mar 20 21:36:28.906226 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 20 21:36:28.906232 kernel: alternatives: applying boot alternatives
Mar 20 21:36:28.906240 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3
Mar 20 21:36:28.906247 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 21:36:28.906253 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 21:36:28.906260 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 21:36:28.906266 kernel: Fallback order for Node 0: 0
Mar 20 21:36:28.906274 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 20 21:36:28.906280 kernel: Policy zone: DMA
Mar 20 21:36:28.906287 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 21:36:28.906293 kernel: software IO TLB: area num 4.
Mar 20 21:36:28.906299 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 20 21:36:28.906306 kernel: Memory: 2387412K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184876K reserved, 0K cma-reserved)
Mar 20 21:36:28.906313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 21:36:28.906319 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 21:36:28.906326 kernel: rcu: RCU event tracing is enabled.
Mar 20 21:36:28.906333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 21:36:28.906339 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 21:36:28.906346 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 21:36:28.906353 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 21:36:28.906360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 21:36:28.906366 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 20 21:36:28.906372 kernel: GICv3: 256 SPIs implemented
Mar 20 21:36:28.906379 kernel: GICv3: 0 Extended SPIs implemented
Mar 20 21:36:28.906385 kernel: Root IRQ handler: gic_handle_irq
Mar 20 21:36:28.906391 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 20 21:36:28.906397 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 20 21:36:28.906403 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 20 21:36:28.906410 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 20 21:36:28.906416 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 20 21:36:28.906424 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 20 21:36:28.906430 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 20 21:36:28.906437 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 21:36:28.906443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:36:28.906449 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 20 21:36:28.906456 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 20 21:36:28.906462 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 20 21:36:28.906468 kernel: arm-pv: using stolen time PV
Mar 20 21:36:28.906475 kernel: Console: colour dummy device 80x25
Mar 20 21:36:28.906482 kernel: ACPI: Core revision 20230628
Mar 20 21:36:28.906488 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 20 21:36:28.906496 kernel: pid_max: default: 32768 minimum: 301
Mar 20 21:36:28.906503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 21:36:28.906509 kernel: landlock: Up and running.
Mar 20 21:36:28.906516 kernel: SELinux: Initializing.
Mar 20 21:36:28.906522 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:36:28.906529 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 21:36:28.906535 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:36:28.906542 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 21:36:28.906548 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 21:36:28.906556 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 21:36:28.906563 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 20 21:36:28.906569 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 20 21:36:28.906575 kernel: Remapping and enabling EFI services.
Mar 20 21:36:28.906582 kernel: smp: Bringing up secondary CPUs ...
Mar 20 21:36:28.906588 kernel: Detected PIPT I-cache on CPU1
Mar 20 21:36:28.906594 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 20 21:36:28.906601 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 20 21:36:28.906617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:36:28.906626 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 20 21:36:28.906633 kernel: Detected PIPT I-cache on CPU2
Mar 20 21:36:28.906661 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 20 21:36:28.906672 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 20 21:36:28.906679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:36:28.906686 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 20 21:36:28.906693 kernel: Detected PIPT I-cache on CPU3
Mar 20 21:36:28.906700 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 20 21:36:28.906707 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 20 21:36:28.906715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 21:36:28.906722 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 20 21:36:28.906728 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 21:36:28.906735 kernel: SMP: Total of 4 processors activated.
Mar 20 21:36:28.906742 kernel: CPU features: detected: 32-bit EL0 Support
Mar 20 21:36:28.906749 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 20 21:36:28.906756 kernel: CPU features: detected: Common not Private translations
Mar 20 21:36:28.906763 kernel: CPU features: detected: CRC32 instructions
Mar 20 21:36:28.906771 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 20 21:36:28.906778 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 20 21:36:28.906785 kernel: CPU features: detected: LSE atomic instructions
Mar 20 21:36:28.906791 kernel: CPU features: detected: Privileged Access Never
Mar 20 21:36:28.906798 kernel: CPU features: detected: RAS Extension Support
Mar 20 21:36:28.906805 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 20 21:36:28.906812 kernel: CPU: All CPU(s) started at EL1
Mar 20 21:36:28.906819 kernel: alternatives: applying system-wide alternatives
Mar 20 21:36:28.906825 kernel: devtmpfs: initialized
Mar 20 21:36:28.906833 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 21:36:28.906845 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 21:36:28.906852 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 21:36:28.906859 kernel: SMBIOS 3.0.0 present.
Mar 20 21:36:28.906866 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 20 21:36:28.906872 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 21:36:28.906879 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 20 21:36:28.906886 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 20 21:36:28.906893 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 20 21:36:28.906901 kernel: audit: initializing netlink subsys (disabled)
Mar 20 21:36:28.906908 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Mar 20 21:36:28.906915 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 21:36:28.906922 kernel: cpuidle: using governor menu
Mar 20 21:36:28.906929 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 20 21:36:28.906936 kernel: ASID allocator initialised with 32768 entries
Mar 20 21:36:28.906943 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 21:36:28.906950 kernel: Serial: AMBA PL011 UART driver
Mar 20 21:36:28.906957 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 20 21:36:28.906965 kernel: Modules: 0 pages in range for non-PLT usage
Mar 20 21:36:28.906971 kernel: Modules: 509248 pages in range for PLT usage
Mar 20 21:36:28.906978 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 21:36:28.906985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 21:36:28.906992 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 20 21:36:28.906999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 20 21:36:28.907006 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 21:36:28.907012 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 21:36:28.907019 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 20 21:36:28.907027 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 20 21:36:28.907034 kernel: ACPI: Added _OSI(Module Device)
Mar 20 21:36:28.907041 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 21:36:28.907048 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 21:36:28.907054 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 21:36:28.907061 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 21:36:28.907068 kernel: ACPI: Interpreter enabled
Mar 20 21:36:28.907075 kernel: ACPI: Using GIC for interrupt routing
Mar 20 21:36:28.907081 kernel: ACPI: MCFG table detected, 1 entries
Mar 20 21:36:28.907088 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 20 21:36:28.907097 kernel: printk: console [ttyAMA0] enabled
Mar 20 21:36:28.907103 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 21:36:28.907226 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 21:36:28.907313 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 20 21:36:28.907377 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 20 21:36:28.907441 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 20 21:36:28.907504 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 20 21:36:28.907515 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 20 21:36:28.907522 kernel: PCI host bridge to bus 0000:00
Mar 20 21:36:28.907590 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 20 21:36:28.907731 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 20 21:36:28.907798 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 20 21:36:28.907858 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 21:36:28.907938 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 20 21:36:28.908022 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 21:36:28.908089 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 20 21:36:28.908153 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 20 21:36:28.908217 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 21:36:28.908281 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 21:36:28.908345 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 20 21:36:28.908409 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 20 21:36:28.908470 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 20 21:36:28.908527 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 20 21:36:28.908583 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 20 21:36:28.908592 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 20 21:36:28.908600 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 20 21:36:28.908619 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 20 21:36:28.908628 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 20 21:36:28.908721 kernel: iommu: Default domain type: Translated
Mar 20 21:36:28.908730 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 20 21:36:28.908737 kernel: efivars: Registered efivars operations
Mar 20 21:36:28.908744 kernel: vgaarb: loaded
Mar 20 21:36:28.908751 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 20 21:36:28.908758 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 21:36:28.908765 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 21:36:28.908772 kernel: pnp: PnP ACPI init
Mar 20 21:36:28.908865 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 20 21:36:28.908879 kernel: pnp: PnP ACPI: found 1 devices
Mar 20 21:36:28.908886 kernel: NET: Registered PF_INET protocol family
Mar 20 21:36:28.908893 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 21:36:28.908900 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 21:36:28.908907 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 21:36:28.908914 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 21:36:28.908921 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 21:36:28.908928 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 21:36:28.908937 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:36:28.908944 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 21:36:28.908951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 21:36:28.908957 kernel: PCI: CLS 0 bytes, default 64
Mar 20 21:36:28.908964 kernel: kvm [1]: HYP mode not available
Mar 20 21:36:28.908971 kernel: Initialise system trusted keyrings
Mar 20 21:36:28.908978 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 21:36:28.908985 kernel: Key type asymmetric registered
Mar 20 21:36:28.908992 kernel: Asymmetric key parser 'x509' registered
Mar 20 21:36:28.908999 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 20 21:36:28.909007 kernel: io scheduler mq-deadline registered
Mar 20 21:36:28.909014 kernel: io scheduler kyber registered
Mar 20 21:36:28.909021 kernel: io scheduler bfq registered
Mar 20 21:36:28.909028 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 20 21:36:28.909035 kernel: ACPI: button: Power Button [PWRB]
Mar 20 21:36:28.909042 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 20 21:36:28.909113 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 20 21:36:28.909123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 21:36:28.909130 kernel: thunder_xcv, ver 1.0
Mar 20 21:36:28.909138 kernel: thunder_bgx, ver 1.0
Mar 20 21:36:28.909145 kernel: nicpf, ver 1.0
Mar 20 21:36:28.909152 kernel: nicvf, ver 1.0
Mar 20 21:36:28.909224 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 20 21:36:28.909286 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T21:36:28 UTC (1742506588)
Mar 20 21:36:28.909296 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 20 21:36:28.909303 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 20 21:36:28.909310 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 20 21:36:28.909318 kernel: watchdog: Hard watchdog permanently disabled
Mar 20 21:36:28.909325 kernel: NET: Registered PF_INET6 protocol family
Mar 20 21:36:28.909332 kernel: Segment Routing with IPv6
Mar 20 21:36:28.909339 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 21:36:28.909346 kernel: NET: Registered PF_PACKET protocol family
Mar 20 21:36:28.909353 kernel: Key type dns_resolver registered
Mar 20 21:36:28.909359 kernel: registered taskstats version 1
Mar 20 21:36:28.909366 kernel: Loading compiled-in X.509 certificates
Mar 20 21:36:28.909373 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 3a6f52a6c751e8bbe3389ae978b265effe8f77af'
Mar 20 21:36:28.909382 kernel: Key type .fscrypt registered
Mar 20 21:36:28.909388 kernel: Key type fscrypt-provisioning registered
Mar 20 21:36:28.909395 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 21:36:28.909402 kernel: ima: Allocated hash algorithm: sha1
Mar 20 21:36:28.909409 kernel: ima: No architecture policies found
Mar 20 21:36:28.909416 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 20 21:36:28.909422 kernel: clk: Disabling unused clocks
Mar 20 21:36:28.909429 kernel: Freeing unused kernel memory: 38464K
Mar 20 21:36:28.909436 kernel: Run /init as init process
Mar 20 21:36:28.909444 kernel: with arguments:
Mar 20 21:36:28.909451 kernel: /init
Mar 20 21:36:28.909458 kernel: with environment:
Mar 20 21:36:28.909464 kernel: HOME=/
Mar 20 21:36:28.909471 kernel: TERM=linux
Mar 20 21:36:28.909477 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 20 21:36:28.909485 systemd[1]: Successfully made /usr/ read-only.
Mar 20 21:36:28.909494 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 21:36:28.909503 systemd[1]: Detected virtualization kvm.
Mar 20 21:36:28.909510 systemd[1]: Detected architecture arm64.
Mar 20 21:36:28.909518 systemd[1]: Running in initrd.
Mar 20 21:36:28.909525 systemd[1]: No hostname configured, using default hostname.
Mar 20 21:36:28.909532 systemd[1]: Hostname set to .
Mar 20 21:36:28.909539 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 21:36:28.909547 systemd[1]: Queued start job for default target initrd.target.
Mar 20 21:36:28.909554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 21:36:28.909563 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 21:36:28.909571 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 20 21:36:28.909579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 21:36:28.909586 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 20 21:36:28.909595 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 20 21:36:28.909603 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 20 21:36:28.909636 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 20 21:36:28.909644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 21:36:28.909660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 21:36:28.909668 systemd[1]: Reached target paths.target - Path Units.
Mar 20 21:36:28.909687 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 21:36:28.909695 systemd[1]: Reached target swap.target - Swaps.
Mar 20 21:36:28.909702 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 21:36:28.909711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 21:36:28.909718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 21:36:28.909728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 20 21:36:28.909736 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 20 21:36:28.909743 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 21:36:28.909751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 21:36:28.909758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 21:36:28.909765 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 21:36:28.909773 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 20 21:36:28.909780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 21:36:28.909789 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 20 21:36:28.909796 systemd[1]: Starting systemd-fsck-usr.service...
Mar 20 21:36:28.909804 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 21:36:28.909812 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 21:36:28.909819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:36:28.909827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 21:36:28.909834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 21:36:28.909844 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 21:36:28.909851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:36:28.909877 systemd-journald[234]: Collecting audit messages is disabled.
Mar 20 21:36:28.909897 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:36:28.909905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 21:36:28.909913 systemd-journald[234]: Journal started
Mar 20 21:36:28.909931 systemd-journald[234]: Runtime Journal (/run/log/journal/88046a90c141478f8e95a43ad4e3a5b0) is 5.9M, max 47.3M, 41.4M free.
Mar 20 21:36:28.913709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 21:36:28.899239 systemd-modules-load[238]: Inserted module 'overlay'
Mar 20 21:36:28.915446 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 21:36:28.915463 kernel: Bridge firewalling registered
Mar 20 21:36:28.914849 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 20 21:36:28.916555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 21:36:28.918144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 21:36:28.921346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 21:36:28.924056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 21:36:28.926167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 21:36:28.931120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 21:36:28.933017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 21:36:28.937697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:36:28.939630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 21:36:28.941444 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 21:36:28.943399 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 21:36:28.956432 dracut-cmdline[279]: dracut-dracut-053
Mar 20 21:36:28.958721 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3
Mar 20 21:36:28.974468 systemd-resolved[280]: Positive Trust Anchors:
Mar 20 21:36:28.974487 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 21:36:28.974518 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 21:36:28.979280 systemd-resolved[280]: Defaulting to hostname 'linux'.
Mar 20 21:36:28.980583 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 21:36:28.981516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 21:36:29.027630 kernel: SCSI subsystem initialized
Mar 20 21:36:29.032627 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 21:36:29.039634 kernel: iscsi: registered transport (tcp)
Mar 20 21:36:29.051833 kernel: iscsi: registered transport (qla4xxx)
Mar 20 21:36:29.051852 kernel: QLogic iSCSI HBA Driver
Mar 20 21:36:29.091039 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 21:36:29.092931 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 21:36:29.118365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 21:36:29.118406 kernel: device-mapper: uevent: version 1.0.3
Mar 20 21:36:29.118417 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 21:36:29.164636 kernel: raid6: neonx8 gen() 15733 MB/s
Mar 20 21:36:29.181631 kernel: raid6: neonx4 gen() 15771 MB/s
Mar 20 21:36:29.198630 kernel: raid6: neonx2 gen() 13170 MB/s
Mar 20 21:36:29.215632 kernel: raid6: neonx1 gen() 10458 MB/s
Mar 20 21:36:29.232636 kernel: raid6: int64x8 gen() 6774 MB/s
Mar 20 21:36:29.249633 kernel: raid6: int64x4 gen() 7330 MB/s
Mar 20 21:36:29.266633 kernel: raid6: int64x2 gen() 6096 MB/s
Mar 20 21:36:29.283633 kernel: raid6: int64x1 gen() 5052 MB/s
Mar 20 21:36:29.283665 kernel: raid6: using algorithm neonx4 gen() 15771 MB/s
Mar 20 21:36:29.300628 kernel: raid6: .... xor() 12543 MB/s, rmw enabled
Mar 20 21:36:29.300641 kernel: raid6: using neon recovery algorithm
Mar 20 21:36:29.305790 kernel: xor: measuring software checksum speed
Mar 20 21:36:29.305814 kernel: 8regs : 21630 MB/sec
Mar 20 21:36:29.305837 kernel: 32regs : 21704 MB/sec
Mar 20 21:36:29.306727 kernel: arm64_neon : 27794 MB/sec
Mar 20 21:36:29.306742 kernel: xor: using function: arm64_neon (27794 MB/sec)
Mar 20 21:36:29.355638 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 21:36:29.365334 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 21:36:29.367778 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 21:36:29.392699 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Mar 20 21:36:29.396319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 21:36:29.398952 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 20 21:36:29.421563 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Mar 20 21:36:29.445389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 21:36:29.447245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 21:36:29.506051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 21:36:29.508461 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 20 21:36:29.528653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 20 21:36:29.529800 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 21:36:29.531424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 21:36:29.533089 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 21:36:29.536728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 20 21:36:29.554042 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 21:36:29.561658 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 20 21:36:29.572039 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 20 21:36:29.572136 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 20 21:36:29.572152 kernel: GPT:9289727 != 19775487
Mar 20 21:36:29.572162 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 20 21:36:29.572172 kernel: GPT:9289727 != 19775487
Mar 20 21:36:29.572180 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 20 21:36:29.572189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:36:29.562808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 21:36:29.562906 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:36:29.568384 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:36:29.571135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 21:36:29.571260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:36:29.575314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:36:29.576943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 21:36:29.595694 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (521)
Mar 20 21:36:29.595735 kernel: BTRFS: device fsid 892d57a1-84f1-442c-90df-b8383db1b8c3 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (523)
Mar 20 21:36:29.599288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 21:36:29.612320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 20 21:36:29.623876 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 20 21:36:29.630979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 21:36:29.636856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 20 21:36:29.637759 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 20 21:36:29.640074 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 20 21:36:29.642212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 21:36:29.662375 disk-uuid[551]: Primary Header is updated.
Mar 20 21:36:29.662375 disk-uuid[551]: Secondary Entries is updated.
Mar 20 21:36:29.662375 disk-uuid[551]: Secondary Header is updated.
Mar 20 21:36:29.666641 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:36:29.670646 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 21:36:30.677656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 21:36:30.678155 disk-uuid[556]: The operation has completed successfully.
Mar 20 21:36:30.703048 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 20 21:36:30.703164 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 20 21:36:30.727356 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 20 21:36:30.742223 sh[573]: Success
Mar 20 21:36:30.754630 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 20 21:36:30.780324 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 20 21:36:30.782689 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 20 21:36:30.796511 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 20 21:36:30.801927 kernel: BTRFS info (device dm-0): first mount of filesystem 892d57a1-84f1-442c-90df-b8383db1b8c3
Mar 20 21:36:30.801959 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 20 21:36:30.802766 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 20 21:36:30.803789 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 20 21:36:30.803827 kernel: BTRFS info (device dm-0): using free space tree
Mar 20 21:36:30.807315 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 20 21:36:30.808346 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 20 21:36:30.809018 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 20 21:36:30.811526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 20 21:36:30.833133 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34
Mar 20 21:36:30.833169 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 21:36:30.833179 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:36:30.835122 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:36:30.839655 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34
Mar 20 21:36:30.842132 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 20 21:36:30.844961 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 20 21:36:30.908370 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 20 21:36:30.911108 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 21:36:30.940266 ignition[665]: Ignition 2.20.0
Mar 20 21:36:30.940276 ignition[665]: Stage: fetch-offline
Mar 20 21:36:30.940304 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Mar 20 21:36:30.940312 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:36:30.940458 ignition[665]: parsed url from cmdline: ""
Mar 20 21:36:30.940461 ignition[665]: no config URL provided
Mar 20 21:36:30.940466 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Mar 20 21:36:30.940474 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Mar 20 21:36:30.940497 ignition[665]: op(1): [started] loading QEMU firmware config module
Mar 20 21:36:30.940502 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 20 21:36:30.950723 ignition[665]: op(1): [finished] loading QEMU firmware config module
Mar 20 21:36:30.952048 systemd-networkd[761]: lo: Link UP
Mar 20 21:36:30.952060 systemd-networkd[761]: lo: Gained carrier
Mar 20 21:36:30.952863 systemd-networkd[761]: Enumeration completed
Mar 20 21:36:30.953238 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:36:30.953241 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 21:36:30.953694 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 20 21:36:30.953841 systemd-networkd[761]: eth0: Link UP
Mar 20 21:36:30.953844 systemd-networkd[761]: eth0: Gained carrier
Mar 20 21:36:30.953850 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 21:36:30.954622 systemd[1]: Reached target network.target - Network.
Mar 20 21:36:30.981666 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 20 21:36:30.999482 ignition[665]: parsing config with SHA512: 3a6836e920915def065de7ce79ec044d76dc566bbe48889db009cd48e36c5b07508d79651447f37b076f50ff38ae6923ac472c4bbaed4e1ed2fa6474a8f71d5b
Mar 20 21:36:31.005871 unknown[665]: fetched base config from "system"
Mar 20 21:36:31.005881 unknown[665]: fetched user config from "qemu"
Mar 20 21:36:31.006370 ignition[665]: fetch-offline: fetch-offline passed
Mar 20 21:36:31.006453 ignition[665]: Ignition finished successfully
Mar 20 21:36:31.008422 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 21:36:31.009673 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 20 21:36:31.011745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 20 21:36:31.033103 ignition[769]: Ignition 2.20.0
Mar 20 21:36:31.033111 ignition[769]: Stage: kargs
Mar 20 21:36:31.033244 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Mar 20 21:36:31.033253 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:36:31.034044 ignition[769]: kargs: kargs passed
Mar 20 21:36:31.034081 ignition[769]: Ignition finished successfully
Mar 20 21:36:31.036477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 20 21:36:31.038733 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 20 21:36:31.055580 ignition[777]: Ignition 2.20.0
Mar 20 21:36:31.055589 ignition[777]: Stage: disks
Mar 20 21:36:31.055776 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Mar 20 21:36:31.055786 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:36:31.057384 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 20 21:36:31.056546 ignition[777]: disks: disks passed
Mar 20 21:36:31.059013 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 20 21:36:31.056585 ignition[777]: Ignition finished successfully
Mar 20 21:36:31.060152 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 20 21:36:31.061328 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 21:36:31.062724 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 20 21:36:31.063822 systemd[1]: Reached target basic.target - Basic System.
Mar 20 21:36:31.065984 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 20 21:36:31.083661 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 20 21:36:31.087288 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 20 21:36:31.089026 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 20 21:36:31.145395 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 20 21:36:31.146688 kernel: EXT4-fs (vda9): mounted filesystem 78c526d9-91af-4481-a769-6d3064caa829 r/w with ordered data mode. Quota mode: none.
Mar 20 21:36:31.146409 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 20 21:36:31.151104 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 21:36:31.153073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 20 21:36:31.153884 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 20 21:36:31.153923 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 20 21:36:31.153945 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 20 21:36:31.164915 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 20 21:36:31.166576 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 20 21:36:31.170620 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (797)
Mar 20 21:36:31.172638 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34
Mar 20 21:36:31.172658 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 21:36:31.172668 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:36:31.175644 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:36:31.175792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 21:36:31.207456 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Mar 20 21:36:31.211229 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Mar 20 21:36:31.214894 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Mar 20 21:36:31.218323 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 20 21:36:31.288928 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 20 21:36:31.291086 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 20 21:36:31.292603 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 20 21:36:31.305657 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34
Mar 20 21:36:31.317825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 20 21:36:31.326942 ignition[911]: INFO : Ignition 2.20.0
Mar 20 21:36:31.326942 ignition[911]: INFO : Stage: mount
Mar 20 21:36:31.328388 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:36:31.328388 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:36:31.328388 ignition[911]: INFO : mount: mount passed
Mar 20 21:36:31.328388 ignition[911]: INFO : Ignition finished successfully
Mar 20 21:36:31.330420 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 20 21:36:31.332624 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 20 21:36:31.932795 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 20 21:36:31.934513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 21:36:31.955694 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (924)
Mar 20 21:36:31.955727 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34
Mar 20 21:36:31.955738 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 21:36:31.957099 kernel: BTRFS info (device vda6): using free space tree
Mar 20 21:36:31.959646 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 21:36:31.960038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 21:36:31.982014 ignition[941]: INFO : Ignition 2.20.0
Mar 20 21:36:31.982014 ignition[941]: INFO : Stage: files
Mar 20 21:36:31.983435 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 21:36:31.983435 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 21:36:31.983435 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Mar 20 21:36:31.986602 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 20 21:36:31.986602 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 20 21:36:31.989392 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 20 21:36:31.990722 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 20 21:36:31.990722 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 20 21:36:31.989975 unknown[941]: wrote ssh authorized keys file for user: core
Mar 20 21:36:31.994253 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 20 21:36:31.994253 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 20 21:36:32.036869 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 20 21:36:32.152335 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 21:36:32.154356 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 20 21:36:32.207255 systemd-networkd[761]: eth0: Gained IPv6LL
Mar 20 21:36:32.470967 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 20 21:36:32.697115 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 21:36:32.697115 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 20 21:36:32.700397 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:36:32.715049 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:36:32.718098 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 21:36:32.719288 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 20 21:36:32.719288 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 20 21:36:32.719288 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 20 21:36:32.719288 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:36:32.719288 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 21:36:32.719288 ignition[941]: INFO : files: files passed
Mar 20 21:36:32.719288 ignition[941]: INFO : Ignition finished successfully
Mar 20 21:36:32.722643 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 20 21:36:32.726806 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 20 21:36:32.729290 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 20 21:36:32.748795 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 20 21:36:32.749595 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 20 21:36:32.751181 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 20 21:36:32.752242 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:36:32.752242 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:36:32.754512 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 21:36:32.754365 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 21:36:32.755946 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 20 21:36:32.759059 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 20 21:36:32.800794 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 20 21:36:32.800912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 20 21:36:32.802899 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 20 21:36:32.804419 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 20 21:36:32.805941 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 20 21:36:32.806632 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 20 21:36:32.820427 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 21:36:32.822382 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 21:36:32.839623 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:36:32.840499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:36:32.842022 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 21:36:32.843314 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 21:36:32.843419 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:36:32.845362 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 21:36:32.846824 systemd[1]: Stopped target basic.target - Basic System. Mar 20 21:36:32.848096 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 21:36:32.849333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:36:32.850860 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 21:36:32.852521 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 21:36:32.853821 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:36:32.855342 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 21:36:32.856894 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 21:36:32.858138 systemd[1]: Stopped target swap.target - Swaps. Mar 20 21:36:32.859225 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 21:36:32.859334 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:36:32.861004 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:36:32.862451 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:36:32.864012 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 21:36:32.864670 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:36:32.865846 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 21:36:32.865954 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 21:36:32.867947 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 21:36:32.868066 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:36:32.869428 systemd[1]: Stopped target paths.target - Path Units. Mar 20 21:36:32.870518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 21:36:32.873689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:36:32.875537 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 21:36:32.876304 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 21:36:32.877465 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 21:36:32.877544 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:36:32.878656 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 21:36:32.878735 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:36:32.879928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 21:36:32.880027 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
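Everything from initrd-cleanup.service onward still runs inside the initramfs: targets and services are stopped in reverse dependency order until only the units needed to pivot into the real root remain. After boot the same sequence can be reconstructed from the persisted journal; a sketch, assuming a booted systemd host:

  # PID 1's "Stopped target ..." messages from the current boot show the teardown order.
  journalctl -b -o short-precise _PID=1 | grep -i 'stopped target'
  # Time spent in firmware, loader, kernel, initrd, and userspace:
  systemd-analyze
  # Longest dependency chain on the way to the default target:
  systemd-analyze critical-chain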
Mar 20 21:36:32.881301 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 21:36:32.881401 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 21:36:32.883136 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 21:36:32.884478 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 20 21:36:32.884633 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:36:32.893084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 21:36:32.893733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 21:36:32.893844 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:36:32.895182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 21:36:32.895270 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:36:32.901633 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 21:36:32.901786 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 21:36:32.904785 ignition[997]: INFO : Ignition 2.20.0 Mar 20 21:36:32.904785 ignition[997]: INFO : Stage: umount Mar 20 21:36:32.907150 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:36:32.907150 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:36:32.907150 ignition[997]: INFO : umount: umount passed Mar 20 21:36:32.907150 ignition[997]: INFO : Ignition finished successfully Mar 20 21:36:32.906390 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 21:36:32.908036 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 21:36:32.908121 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 21:36:32.909632 systemd[1]: Stopped target network.target - Network. Mar 20 21:36:32.910878 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 21:36:32.910943 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 21:36:32.912165 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 21:36:32.912205 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 21:36:32.913381 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 21:36:32.913418 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 21:36:32.914704 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 21:36:32.914745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 21:36:32.916100 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 21:36:32.917448 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 21:36:32.922522 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 21:36:32.922681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 21:36:32.925292 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 21:36:32.925530 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 21:36:32.925567 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:36:32.928355 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:36:32.928580 systemd[1]: systemd-networkd.service: Deactivated successfully. 
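Ignition runs in discrete stages (fetch, disks, mount, files, umount); the ignition[997] umount stage logged here only detaches the /sysroot mounts, which is why it finds no config and simply reports "umount passed". The .ignition-result.json written at the end of the files stage survives the pivot and summarizes what was applied; exact JSON fields vary by Ignition version:

  # Summary of the applied config, written by the files stage:
  cat /etc/.ignition-result.json
  # All Ignition log lines from this boot, across every stage:
  journalctl -b -t ignition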
Mar 20 21:36:32.928725 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 21:36:32.931340 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 21:36:32.931880 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 21:36:32.931937 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:36:32.933709 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 21:36:32.934519 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 21:36:32.934583 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:36:32.936257 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:36:32.936317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:36:32.938368 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 21:36:32.938420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 21:36:32.940014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:36:32.943858 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 21:36:32.952380 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 21:36:32.953236 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 21:36:32.960341 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 21:36:32.960478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:36:32.962187 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 21:36:32.962223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 21:36:32.963622 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 21:36:32.963662 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:36:32.965195 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 21:36:32.965241 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:36:32.967337 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 21:36:32.967383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 21:36:32.969340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:36:32.969382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:36:32.972139 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 21:36:32.973388 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 21:36:32.973436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:36:32.976012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:36:32.976053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:36:32.979123 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 20 21:36:32.979177 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:36:32.981827 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 21:36:32.981909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 20 21:36:32.984048 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 21:36:32.984134 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 21:36:32.986848 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 21:36:32.987654 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 21:36:32.988891 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 21:36:32.990653 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 20 21:36:33.014167 systemd[1]: Switching root. Mar 20 21:36:33.043447 systemd-journald[234]: Journal stopped Mar 20 21:36:33.731145 systemd-journald[234]: Received SIGTERM from PID 1 (systemd). Mar 20 21:36:33.731199 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 21:36:33.731211 kernel: SELinux: policy capability open_perms=1 Mar 20 21:36:33.731221 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 21:36:33.731230 kernel: SELinux: policy capability always_check_network=0 Mar 20 21:36:33.731239 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 21:36:33.731249 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 21:36:33.731258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 21:36:33.731270 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 21:36:33.731279 kernel: audit: type=1403 audit(1742506593.171:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 21:36:33.731290 systemd[1]: Successfully loaded SELinux policy in 29.694ms. Mar 20 21:36:33.731309 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.146ms. Mar 20 21:36:33.731320 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:36:33.731331 systemd[1]: Detected virtualization kvm. Mar 20 21:36:33.731341 systemd[1]: Detected architecture arm64. Mar 20 21:36:33.731351 systemd[1]: Detected first boot. Mar 20 21:36:33.731361 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:36:33.731372 kernel: NET: Registered PF_VSOCK protocol family Mar 20 21:36:33.731382 zram_generator::config[1045]: No configuration found. Mar 20 21:36:33.731394 systemd[1]: Populated /etc with preset unit settings. Mar 20 21:36:33.731405 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 21:36:33.731415 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 21:36:33.731425 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 21:36:33.731438 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 21:36:33.731449 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 21:36:33.731460 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 21:36:33.731471 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 21:36:33.731481 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 21:36:33.731491 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
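At this point the system has pivoted out of the initramfs: journald restarts, the SELinux policy loads in under 30 ms, and systemd 256.8 declares a first boot because /etc/machine-id is still uninitialized, so it seeds the machine ID from the VM UUID and populates /etc with preset unit settings. A sketch of how to inspect the result on the booted host:

  # Machine ID that systemd seeded from the VM UUID on first boot:
  cat /etc/machine-id
  # Re-derive it the way PID 1 does, without committing anything:
  systemd-machine-id-setup --print
  # Preset policy behind "Populated /etc with preset unit settings":
  ls /usr/lib/systemd/system-preset/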
Mar 20 21:36:33.731501 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 20 21:36:33.731512 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 21:36:33.731522 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 21:36:33.731532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:36:33.731542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:36:33.731554 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 21:36:33.731564 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 20 21:36:33.731575 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 21:36:33.731585 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:36:33.731595 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 20 21:36:33.731623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:36:33.731639 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 21:36:33.731651 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 21:36:33.731661 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 21:36:33.731672 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 21:36:33.731682 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:36:33.731692 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:36:33.731702 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:36:33.731712 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:36:33.731722 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 20 21:36:33.731732 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 21:36:33.731744 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 21:36:33.731754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:36:33.731765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:36:33.731775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:36:33.731785 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 21:36:33.731796 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 21:36:33.731806 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 21:36:33.731816 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 21:36:33.731826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 21:36:33.731838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 21:36:33.731848 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 21:36:33.731864 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 20 21:36:33.731874 systemd[1]: Reached target machines.target - Containers. 
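The slices created above partition the cgroup tree: getty.slice, modprobe.slice, and serial-getty.slice collect instances of the corresponding templated units, while the automount units defer mounting /boot and binfmt_misc until first access. Their runtime state can be inspected like this (sketch):

  # Slice units map one-to-one onto nodes of the cgroup tree:
  systemctl list-units --type=slice
  systemd-cgls --no-pager
  # Automounts mount their backing filesystem lazily, on first access:
  systemctl list-units --type=automount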
Mar 20 21:36:33.731884 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 21:36:33.731894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:36:33.731904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:36:33.731915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 20 21:36:33.731924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:36:33.731937 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:36:33.731947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:36:33.731957 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 21:36:33.731967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:36:33.731978 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 21:36:33.731988 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 21:36:33.731998 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 21:36:33.732008 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 21:36:33.732020 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 21:36:33.732031 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:36:33.732041 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:36:33.732051 kernel: fuse: init (API version 7.39) Mar 20 21:36:33.732060 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:36:33.732070 kernel: loop: module loaded Mar 20 21:36:33.732081 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 21:36:33.732091 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 20 21:36:33.732101 kernel: ACPI: bus type drm_connector registered Mar 20 21:36:33.732112 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 21:36:33.732122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:36:33.732133 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 21:36:33.732143 systemd[1]: Stopped verity-setup.service. Mar 20 21:36:33.732153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 21:36:33.732165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 21:36:33.732175 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 21:36:33.732186 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 20 21:36:33.732196 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 21:36:33.732206 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 20 21:36:33.732216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:36:33.732243 systemd-journald[1113]: Collecting audit messages is disabled. Mar 20 21:36:33.732266 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 20 21:36:33.732277 systemd-journald[1113]: Journal started Mar 20 21:36:33.732297 systemd-journald[1113]: Runtime Journal (/run/log/journal/88046a90c141478f8e95a43ad4e3a5b0) is 5.9M, max 47.3M, 41.4M free. Mar 20 21:36:33.550817 systemd[1]: Queued start job for default target multi-user.target. Mar 20 21:36:33.562439 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 20 21:36:33.562837 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 21:36:33.734364 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 20 21:36:33.736118 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:36:33.736984 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 20 21:36:33.738376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:36:33.738555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:36:33.740903 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:36:33.741060 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:36:33.742298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:36:33.742453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:36:33.743900 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 21:36:33.744047 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 21:36:33.745431 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:36:33.746644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:36:33.747895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:36:33.749215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 21:36:33.750729 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 21:36:33.752054 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 21:36:33.763566 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 21:36:33.765788 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 21:36:33.767661 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 21:36:33.768720 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 21:36:33.768750 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:36:33.770507 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 21:36:33.778579 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 21:36:33.780522 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 21:36:33.781782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:36:33.783143 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 21:36:33.785218 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
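The journald[1113] header shows a volatile runtime journal in /run/log/journal capped at 47.3M, while the systemd-journal-flush.service being started is what migrates it into the persistent journal under /var/log/journal (the 8M system journal appears a few lines further on). Two relevant commands, both needing root:

  # Current journal location and size:
  journalctl --disk-usage
  # Trigger the same runtime-to-persistent migration that
  # systemd-journal-flush.service performs at boot:
  journalctl --flush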
Mar 20 21:36:33.786511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:36:33.789739 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 20 21:36:33.790849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:36:33.792658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:36:33.794492 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 21:36:33.797843 systemd-journald[1113]: Time spent on flushing to /var/log/journal/88046a90c141478f8e95a43ad4e3a5b0 is 20.246ms for 867 entries. Mar 20 21:36:33.797843 systemd-journald[1113]: System Journal (/var/log/journal/88046a90c141478f8e95a43ad4e3a5b0) is 8M, max 195.6M, 187.6M free. Mar 20 21:36:33.829759 systemd-journald[1113]: Received client request to flush runtime journal. Mar 20 21:36:33.829811 kernel: loop0: detected capacity change from 0 to 189592 Mar 20 21:36:33.798107 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 21:36:33.802693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:36:33.813964 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 21:36:33.816531 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 21:36:33.819885 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 21:36:33.822306 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 21:36:33.824743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:36:33.831692 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 21:36:33.836733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 21:36:33.839628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 21:36:33.842009 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 21:36:33.844827 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 21:36:33.851942 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 21:36:33.859760 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:36:33.862947 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 20 21:36:33.876660 kernel: loop1: detected capacity change from 0 to 126448 Mar 20 21:36:33.881640 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 21:36:33.891516 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Mar 20 21:36:33.891536 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Mar 20 21:36:33.896163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
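The repeated "ACLs are not supported, ignoring" messages from systemd-tmpfiles are expected on this image: the feature string logged earlier shows a build compiled with -ACL, so tmpfiles.d lines that set POSIX ACLs (the a+/A+ line types) are skipped instead of failing. Both generators are idempotent and safe to re-run; the ACL entry below is illustrative, not taken from this host:

  # Re-apply sysusers.d and tmpfiles.d configuration:
  systemd-sysusers
  systemd-tmpfiles --create
  # Illustrative tmpfiles.d ACL line of the kind a -ACL build ignores:
  #   a+ /var/log/journal - - - - d:group:adm:r-x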
Mar 20 21:36:33.926643 kernel: loop2: detected capacity change from 0 to 103832 Mar 20 21:36:33.972632 kernel: loop3: detected capacity change from 0 to 189592 Mar 20 21:36:33.985797 kernel: loop4: detected capacity change from 0 to 126448 Mar 20 21:36:33.990729 kernel: loop5: detected capacity change from 0 to 103832 Mar 20 21:36:33.993965 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 21:36:33.994326 (sd-merge)[1189]: Merged extensions into '/usr'. Mar 20 21:36:33.998107 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 21:36:33.998529 systemd[1]: Reloading... Mar 20 21:36:34.060683 zram_generator::config[1216]: No configuration found. Mar 20 21:36:34.067155 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 21:36:34.144984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:36:34.193573 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 21:36:34.193981 systemd[1]: Reloading finished in 194 ms. Mar 20 21:36:34.213276 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 21:36:34.216183 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 21:36:34.235905 systemd[1]: Starting ensure-sysext.service... Mar 20 21:36:34.237761 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:36:34.247681 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Mar 20 21:36:34.247697 systemd[1]: Reloading... Mar 20 21:36:34.258294 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 21:36:34.258507 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 21:36:34.259159 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 21:36:34.259351 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Mar 20 21:36:34.259410 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Mar 20 21:36:34.261908 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:36:34.261922 systemd-tmpfiles[1252]: Skipping /boot Mar 20 21:36:34.270362 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:36:34.270382 systemd-tmpfiles[1252]: Skipping /boot Mar 20 21:36:34.292624 zram_generator::config[1281]: No configuration found. Mar 20 21:36:34.369199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:36:34.418193 systemd[1]: Reloading finished in 170 ms. Mar 20 21:36:34.435029 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 21:36:34.448136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:36:34.455573 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
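The (sd-merge) lines are systemd-sysext overlaying the three extension images staged by Ignition onto /usr; the surrounding loopN "detected capacity change" messages correspond to those images being attached, and the reload that follows makes the units shipped inside containerd-flatcar, docker-flatcar, and kubernetes visible to systemd. The merge can be inspected and redone at runtime:

  # List merged extensions and the hierarchies they overlay:
  systemd-sysext status
  # Re-scan /etc/extensions, /run/extensions, /var/lib/extensions and re-merge:
  systemd-sysext refresh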
Mar 20 21:36:34.457900 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 21:36:34.463490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 21:36:34.466221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:36:34.469817 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:36:34.474947 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 21:36:34.482013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:36:34.489866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:36:34.493513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:36:34.497368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:36:34.498826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:36:34.498951 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:36:34.501760 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 21:36:34.506082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 21:36:34.508033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:36:34.509667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:36:34.511708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:36:34.511872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:36:34.515454 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Mar 20 21:36:34.515824 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:36:34.515976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:36:34.521577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 21:36:34.526674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:36:34.530827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:36:34.532975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:36:34.535865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:36:34.537902 augenrules[1356]: No rules Mar 20 21:36:34.550050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:36:34.553873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:36:34.554006 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:36:34.555818 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
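The modprobe@dm_mod, modprobe@efi_pstore, and modprobe@loop instances reappearing here are systemd's templated wrapper around module loading: modprobe@.service runs modprobe on its instance name, letting other units pull kernel modules in as ordinary dependencies. A quick look at the mechanism:

  # The template behind every modprobe@<module>.service instance:
  systemctl cat modprobe@loop.service
  # The equivalent manual operation:
  modprobe loop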
Mar 20 21:36:34.556581 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 21:36:34.558229 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:36:34.560743 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 21:36:34.563906 systemd[1]: Finished ensure-sysext.service. Mar 20 21:36:34.572409 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:36:34.574648 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:36:34.575749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:36:34.575886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:36:34.577189 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:36:34.577326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:36:34.578490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:36:34.578640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:36:34.581146 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:36:34.581630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:36:34.585086 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 21:36:34.589432 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 21:36:34.593705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1367) Mar 20 21:36:34.614233 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 20 21:36:34.621210 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:36:34.622223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:36:34.622279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:36:34.624833 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 21:36:34.631373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:36:34.635167 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 21:36:34.647130 systemd-resolved[1323]: Positive Trust Anchors: Mar 20 21:36:34.647408 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:36:34.647495 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:36:34.657771 systemd-resolved[1323]: Defaulting to hostname 'linux'. 
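On startup systemd-resolved logs its DNSSEC trust configuration: the positive anchor is the DS record of the root zone KSK (key tag 20326), and the long negative list covers private-use zones such as 10.in-addr.arpa and home.arpa, where validation is deliberately skipped. With nothing configured it falls back to the hostname 'linux'. Runtime state is visible via resolvectl; the queried name below is an arbitrary example:

  # Global and per-link resolver state, including the DNSSEC setting:
  resolvectl status
  # Resolve through resolved and report which server answered:
  resolvectl query flatcar.org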
Mar 20 21:36:34.659373 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:36:34.660574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:36:34.665244 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 21:36:34.706253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:36:34.717321 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 21:36:34.718681 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 21:36:34.724566 systemd-networkd[1398]: lo: Link UP Mar 20 21:36:34.724574 systemd-networkd[1398]: lo: Gained carrier Mar 20 21:36:34.725422 systemd-networkd[1398]: Enumeration completed Mar 20 21:36:34.725537 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:36:34.727088 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:36:34.729026 systemd[1]: Reached target network.target - Network. Mar 20 21:36:34.729718 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:36:34.729729 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:36:34.731744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:36:34.732184 systemd-networkd[1398]: eth0: Link UP Mar 20 21:36:34.732187 systemd-networkd[1398]: eth0: Gained carrier Mar 20 21:36:34.732201 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:36:34.734753 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 21:36:34.743596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 21:36:34.746822 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:36:34.747651 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Mar 20 21:36:34.748307 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:36:34.748361 systemd-timesyncd[1399]: Initial clock synchronization to Thu 2025-03-20 21:36:35.098134 UTC. Mar 20 21:36:34.754682 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:36:34.755090 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:36:34.768681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:36:34.791589 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:36:34.793388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:36:34.794503 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:36:34.795660 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:36:34.796848 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:36:34.798179 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
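eth0 matched /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all that enables DHCP, which is where the 10.0.0.3/16 lease, the 10.0.0.1 gateway, and the NTP server picked up by systemd-timesyncd all come from. A more specifically named .network file in /etc sorts earlier and takes precedence; the file name and contents below are a hypothetical sketch:

  # Lease and link state networkd acquired for eth0:
  networkctl status eth0
  # Hypothetical override; /etc/systemd/network entries sort before the
  # zz-default.network catch-all shipped in /usr/lib/systemd/network:
  cat > /etc/systemd/network/10-eth0.network <<'EOF'
  [Match]
  Name=eth0

  [Network]
  DHCP=yes
  EOF
  networkctl reload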
Mar 20 21:36:34.799417 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:36:34.800649 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 21:36:34.801838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:36:34.801875 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:36:34.802725 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:36:34.804414 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:36:34.806705 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:36:34.809707 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:36:34.811054 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:36:34.812305 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:36:34.819532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 21:36:34.820970 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:36:34.823086 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:36:34.824704 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:36:34.825824 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:36:34.826731 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:36:34.827711 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:36:34.827742 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:36:34.828562 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:36:34.830227 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:36:34.830408 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 21:36:34.833447 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:36:34.837763 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:36:34.838525 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:36:34.839508 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:36:34.840736 jq[1428]: false Mar 20 21:36:34.841198 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 21:36:34.843857 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:36:34.845853 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 21:36:34.849785 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:36:34.854502 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:36:34.855289 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:36:34.856844 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 20 21:36:34.858667 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 21:36:34.860389 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 21:36:34.862780 extend-filesystems[1429]: Found loop3 Mar 20 21:36:34.862780 extend-filesystems[1429]: Found loop4 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found loop5 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda1 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda2 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda3 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found usr Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda4 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda6 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda7 Mar 20 21:36:34.869676 extend-filesystems[1429]: Found vda9 Mar 20 21:36:34.869676 extend-filesystems[1429]: Checking size of /dev/vda9 Mar 20 21:36:34.863977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:36:34.867826 dbus-daemon[1427]: [system] SELinux support is enabled Mar 20 21:36:34.864156 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:36:34.892966 jq[1444]: true Mar 20 21:36:34.864950 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:36:34.865116 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 21:36:34.868351 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 21:36:34.892952 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:36:34.893145 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 21:36:34.894773 jq[1451]: true Mar 20 21:36:34.896631 extend-filesystems[1429]: Resized partition /dev/vda9 Mar 20 21:36:34.899265 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:36:34.899308 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:36:34.899569 tar[1448]: linux-arm64/helm Mar 20 21:36:34.903075 extend-filesystems[1465]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:36:34.907194 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:36:34.905914 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:36:34.905934 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 21:36:34.908564 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:36:34.922041 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370) Mar 20 21:36:34.924638 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:36:34.939021 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:36:34.939021 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:36:34.939021 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
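extend-filesystems.service grows the root filesystem to fill the virtual disk: resize2fs 1.47.2 notes that /dev/vda9 is mounted on / and performs the on-line resize from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB), so no unmount is needed. The manual equivalent is a sketch like:

  # Grow a mounted ext4 filesystem in place; with no size argument,
  # resize2fs expands to fill the underlying partition:
  lsblk /dev/vda9
  resize2fs /dev/vda9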
Mar 20 21:36:34.944885 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Mar 20 21:36:34.944918 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 21:36:34.945107 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:36:34.946392 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (Power Button) Mar 20 21:36:34.946716 systemd-logind[1436]: New seat seat0. Mar 20 21:36:34.947964 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:36:34.978866 update_engine[1440]: I20250320 21:36:34.975502 1440 main.cc:92] Flatcar Update Engine starting Mar 20 21:36:34.978065 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:36:34.981547 update_engine[1440]: I20250320 21:36:34.980667 1440 update_check_scheduler.cc:74] Next update check in 9m55s Mar 20 21:36:34.981103 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 21:36:34.989791 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:36:34.992414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:36:34.994902 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 20 21:36:35.078472 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:36:35.157513 containerd[1452]: time="2025-03-20T21:36:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:36:35.160044 containerd[1452]: time="2025-03-20T21:36:35.159884702Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:36:35.171553 containerd[1452]: time="2025-03-20T21:36:35.171516391Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.431µs" Mar 20 21:36:35.172068 containerd[1452]: time="2025-03-20T21:36:35.171915933Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:36:35.172212 containerd[1452]: time="2025-03-20T21:36:35.172189726Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:36:35.172541 containerd[1452]: time="2025-03-20T21:36:35.172517125Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:36:35.172710 containerd[1452]: time="2025-03-20T21:36:35.172606302Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:36:35.173074 containerd[1452]: time="2025-03-20T21:36:35.172773383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:36:35.173074 containerd[1452]: time="2025-03-20T21:36:35.172919047Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:36:35.173074 containerd[1452]: time="2025-03-20T21:36:35.172935538Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:36:35.173525 containerd[1452]: time="2025-03-20T21:36:35.173499490Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs 
filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:36:35.173686 containerd[1452]: time="2025-03-20T21:36:35.173642607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:36:35.173957 containerd[1452]: time="2025-03-20T21:36:35.173801672Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:36:35.173957 containerd[1452]: time="2025-03-20T21:36:35.173818330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:36:35.174120 containerd[1452]: time="2025-03-20T21:36:35.174049288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:36:35.174553 containerd[1452]: time="2025-03-20T21:36:35.174529157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:36:35.175200 containerd[1452]: time="2025-03-20T21:36:35.174707176Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:36:35.175200 containerd[1452]: time="2025-03-20T21:36:35.174723333Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:36:35.175200 containerd[1452]: time="2025-03-20T21:36:35.174752182Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:36:35.175200 containerd[1452]: time="2025-03-20T21:36:35.175005977Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:36:35.175200 containerd[1452]: time="2025-03-20T21:36:35.175080917Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:36:35.179251 containerd[1452]: time="2025-03-20T21:36:35.179213653Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:36:35.179422 containerd[1452]: time="2025-03-20T21:36:35.179402611Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:36:35.179526 containerd[1452]: time="2025-03-20T21:36:35.179508739Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:36:35.179667 containerd[1452]: time="2025-03-20T21:36:35.179631565Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:36:35.179798 containerd[1452]: time="2025-03-20T21:36:35.179777521Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:36:35.179870 containerd[1452]: time="2025-03-20T21:36:35.179855885Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.179974788Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.179998794Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 
21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180013239Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180026306Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180037078Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180048976Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180176646Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180200443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180213344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180224992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180236223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180248539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180268286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180279976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:36:35.180644 containerd[1452]: time="2025-03-20T21:36:35.180292501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:36:35.180941 containerd[1452]: time="2025-03-20T21:36:35.180307280Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:36:35.180941 containerd[1452]: time="2025-03-20T21:36:35.180319889Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:36:35.180941 containerd[1452]: time="2025-03-20T21:36:35.180591928Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:36:35.180941 containerd[1452]: time="2025-03-20T21:36:35.180606749Z" level=info msg="Start snapshots syncer" Mar 20 21:36:35.181054 containerd[1452]: time="2025-03-20T21:36:35.181034764Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:36:35.181488 containerd[1452]: time="2025-03-20T21:36:35.181447416Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:36:35.181679 containerd[1452]: time="2025-03-20T21:36:35.181659294Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:36:35.181834 containerd[1452]: time="2025-03-20T21:36:35.181814310Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:36:35.182005 containerd[1452]: time="2025-03-20T21:36:35.181980389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:36:35.182126 containerd[1452]: time="2025-03-20T21:36:35.182107976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:36:35.182187 containerd[1452]: time="2025-03-20T21:36:35.182173606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:36:35.182253 containerd[1452]: time="2025-03-20T21:36:35.182238067Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:36:35.182311 containerd[1452]: time="2025-03-20T21:36:35.182297977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:36:35.182388 containerd[1452]: time="2025-03-20T21:36:35.182373001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:36:35.182459 containerd[1452]: time="2025-03-20T21:36:35.182444309Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:36:35.182532 containerd[1452]: time="2025-03-20T21:36:35.182517496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:36:35.182590 containerd[1452]: 
time="2025-03-20T21:36:35.182576029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:36:35.182715 containerd[1452]: time="2025-03-20T21:36:35.182628508Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:36:35.182792 containerd[1452]: time="2025-03-20T21:36:35.182775007Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.182894494Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.182911068Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.182921213Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.182929480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.182939625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.182949853Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.183030263Z" level=info msg="runtime interface created" Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.183035231Z" level=info msg="created NRI interface" Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.183043623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.183055187Z" level=info msg="Connect containerd service" Mar 20 21:36:35.183120 containerd[1452]: time="2025-03-20T21:36:35.183089380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:36:35.184064 containerd[1452]: time="2025-03-20T21:36:35.184031289Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:36:35.288690 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290507497Z" level=info msg="Start subscribing containerd event" Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290586988Z" level=info msg="Start recovering state" Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290685350Z" level=info msg="Start event monitor" Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290701715Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290713197Z" level=info msg="Start streaming server" Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290722507Z" level=info msg="Registered 
namespace \"k8s.io\" with NRI" Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290730230Z" level=info msg="runtime interface starting up..." Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290736033Z" level=info msg="starting plugins..." Mar 20 21:36:35.291078 containerd[1452]: time="2025-03-20T21:36:35.290750729Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:36:35.291815 containerd[1452]: time="2025-03-20T21:36:35.291627050Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:36:35.291815 containerd[1452]: time="2025-03-20T21:36:35.291767412Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:36:35.292056 containerd[1452]: time="2025-03-20T21:36:35.291984926Z" level=info msg="containerd successfully booted in 0.135022s" Mar 20 21:36:35.292082 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 21:36:35.307386 tar[1448]: linux-arm64/LICENSE Mar 20 21:36:35.307475 tar[1448]: linux-arm64/README.md Mar 20 21:36:35.310957 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:36:35.320801 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:36:35.322443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:36:35.342059 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:36:35.342245 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:36:35.344823 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:36:35.363555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:36:35.366291 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:36:35.368398 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 20 21:36:35.369784 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 21:36:35.920704 systemd-networkd[1398]: eth0: Gained IPv6LL Mar 20 21:36:35.923112 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:36:35.925779 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:36:35.928434 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:36:35.930996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:36:35.939504 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:36:35.952998 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:36:35.953319 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:36:35.955421 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 21:36:35.959091 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:36:36.439011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:36:36.440280 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:36:36.442835 systemd[1]: Startup finished in 515ms (kernel) + 4.479s (initrd) + 3.304s (userspace) = 8.299s. 
Mar 20 21:36:36.443826 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:36:36.866660 kubelet[1553]: E0320 21:36:36.866528 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:36:36.869226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:36:36.869376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:36:36.869730 systemd[1]: kubelet.service: Consumed 767ms CPU time, 231M memory peak. Mar 20 21:36:41.931041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:36:41.932168 systemd[1]: Started sshd@0-10.0.0.3:22-10.0.0.1:46234.service - OpenSSH per-connection server daemon (10.0.0.1:46234). Mar 20 21:36:42.034460 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 46234 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:42.036011 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:42.041662 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:36:42.042524 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:36:42.047132 systemd-logind[1436]: New session 1 of user core. Mar 20 21:36:42.063746 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:36:42.067180 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:36:42.082388 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:36:42.084362 systemd-logind[1436]: New session c1 of user core. Mar 20 21:36:42.189992 systemd[1570]: Queued start job for default target default.target. Mar 20 21:36:42.198477 systemd[1570]: Created slice app.slice - User Application Slice. Mar 20 21:36:42.198507 systemd[1570]: Reached target paths.target - Paths. Mar 20 21:36:42.198540 systemd[1570]: Reached target timers.target - Timers. Mar 20 21:36:42.199693 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:36:42.208134 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:36:42.208196 systemd[1570]: Reached target sockets.target - Sockets. Mar 20 21:36:42.208234 systemd[1570]: Reached target basic.target - Basic System. Mar 20 21:36:42.208262 systemd[1570]: Reached target default.target - Main User Target. Mar 20 21:36:42.208286 systemd[1570]: Startup finished in 118ms. Mar 20 21:36:42.208389 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:36:42.209679 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:36:42.273988 systemd[1]: Started sshd@1-10.0.0.3:22-10.0.0.1:46244.service - OpenSSH per-connection server daemon (10.0.0.1:46244). Mar 20 21:36:42.329729 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 46244 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:42.330913 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:42.336401 systemd-logind[1436]: New session 2 of user core. 
Mar 20 21:36:42.342779 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 21:36:42.393355 sshd[1583]: Connection closed by 10.0.0.1 port 46244 Mar 20 21:36:42.393721 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Mar 20 21:36:42.404622 systemd[1]: sshd@1-10.0.0.3:22-10.0.0.1:46244.service: Deactivated successfully. Mar 20 21:36:42.406959 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:36:42.409800 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:36:42.410883 systemd[1]: Started sshd@2-10.0.0.3:22-10.0.0.1:38312.service - OpenSSH per-connection server daemon (10.0.0.1:38312). Mar 20 21:36:42.411580 systemd-logind[1436]: Removed session 2. Mar 20 21:36:42.466504 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 38312 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:42.467719 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:42.471781 systemd-logind[1436]: New session 3 of user core. Mar 20 21:36:42.478797 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:36:42.526419 sshd[1591]: Connection closed by 10.0.0.1 port 38312 Mar 20 21:36:42.526705 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Mar 20 21:36:42.535539 systemd[1]: sshd@2-10.0.0.3:22-10.0.0.1:38312.service: Deactivated successfully. Mar 20 21:36:42.537268 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:36:42.538855 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Mar 20 21:36:42.539895 systemd[1]: Started sshd@3-10.0.0.3:22-10.0.0.1:38322.service - OpenSSH per-connection server daemon (10.0.0.1:38322). Mar 20 21:36:42.540528 systemd-logind[1436]: Removed session 3. Mar 20 21:36:42.581261 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:42.582319 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:42.586344 systemd-logind[1436]: New session 4 of user core. Mar 20 21:36:42.603752 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:36:42.654168 sshd[1599]: Connection closed by 10.0.0.1 port 38322 Mar 20 21:36:42.654449 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Mar 20 21:36:42.665411 systemd[1]: sshd@3-10.0.0.3:22-10.0.0.1:38322.service: Deactivated successfully. Mar 20 21:36:42.666620 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:36:42.669383 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:36:42.670983 systemd[1]: Started sshd@4-10.0.0.3:22-10.0.0.1:38328.service - OpenSSH per-connection server daemon (10.0.0.1:38328). Mar 20 21:36:42.671877 systemd-logind[1436]: Removed session 4. Mar 20 21:36:42.721213 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 38328 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:42.722480 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:42.726493 systemd-logind[1436]: New session 5 of user core. Mar 20 21:36:42.737814 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 20 21:36:42.799275 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:36:42.799530 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:36:42.818478 sudo[1608]: pam_unix(sudo:session): session closed for user root Mar 20 21:36:42.820364 sshd[1607]: Connection closed by 10.0.0.1 port 38328 Mar 20 21:36:42.820167 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Mar 20 21:36:42.838838 systemd[1]: sshd@4-10.0.0.3:22-10.0.0.1:38328.service: Deactivated successfully. Mar 20 21:36:42.840959 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:36:42.841802 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:36:42.843923 systemd[1]: Started sshd@5-10.0.0.3:22-10.0.0.1:38330.service - OpenSSH per-connection server daemon (10.0.0.1:38330). Mar 20 21:36:42.844562 systemd-logind[1436]: Removed session 5. Mar 20 21:36:42.892334 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 38330 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:42.893461 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:42.896919 systemd-logind[1436]: New session 6 of user core. Mar 20 21:36:42.908762 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 20 21:36:42.958483 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:36:42.958779 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:36:42.961612 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 20 21:36:42.965798 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:36:42.966251 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:36:42.973908 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:36:43.015242 augenrules[1640]: No rules Mar 20 21:36:43.015872 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:36:43.017669 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:36:43.018421 sudo[1617]: pam_unix(sudo:session): session closed for user root Mar 20 21:36:43.019457 sshd[1616]: Connection closed by 10.0.0.1 port 38330 Mar 20 21:36:43.019889 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 20 21:36:43.029694 systemd[1]: sshd@5-10.0.0.3:22-10.0.0.1:38330.service: Deactivated successfully. Mar 20 21:36:43.031056 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:36:43.031726 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:36:43.033436 systemd[1]: Started sshd@6-10.0.0.3:22-10.0.0.1:38338.service - OpenSSH per-connection server daemon (10.0.0.1:38338). Mar 20 21:36:43.034312 systemd-logind[1436]: Removed session 6. Mar 20 21:36:43.083331 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 38338 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:36:43.084351 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:36:43.088332 systemd-logind[1436]: New session 7 of user core. Mar 20 21:36:43.101834 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 20 21:36:43.152195 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:36:43.152740 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:36:43.487476 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 21:36:43.502979 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 21:36:43.754506 dockerd[1673]: time="2025-03-20T21:36:43.754370218Z" level=info msg="Starting up" Mar 20 21:36:43.756413 dockerd[1673]: time="2025-03-20T21:36:43.756355369Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 21:36:43.850443 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1231690042-merged.mount: Deactivated successfully. Mar 20 21:36:43.865234 dockerd[1673]: time="2025-03-20T21:36:43.865009627Z" level=info msg="Loading containers: start." Mar 20 21:36:44.004639 kernel: Initializing XFRM netlink socket Mar 20 21:36:44.062436 systemd-networkd[1398]: docker0: Link UP Mar 20 21:36:44.119821 dockerd[1673]: time="2025-03-20T21:36:44.119735763Z" level=info msg="Loading containers: done." Mar 20 21:36:44.134937 dockerd[1673]: time="2025-03-20T21:36:44.134890495Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 21:36:44.135075 dockerd[1673]: time="2025-03-20T21:36:44.134971183Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 21:36:44.135173 dockerd[1673]: time="2025-03-20T21:36:44.135142729Z" level=info msg="Daemon has completed initialization" Mar 20 21:36:44.161514 dockerd[1673]: time="2025-03-20T21:36:44.161320408Z" level=info msg="API listen on /run/docker.sock" Mar 20 21:36:44.161604 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 21:36:44.849550 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2073667566-merged.mount: Deactivated successfully. Mar 20 21:36:44.954751 containerd[1452]: time="2025-03-20T21:36:44.954716105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 20 21:36:45.606716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367964941.mount: Deactivated successfully. Mar 20 21:36:47.120243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 21:36:47.122118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:36:47.233202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 20 21:36:47.236235 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:36:47.270625 kubelet[1937]: E0320 21:36:47.270575 1937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:36:47.273426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:36:47.273563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:36:47.273992 systemd[1]: kubelet.service: Consumed 128ms CPU time, 97.3M memory peak. Mar 20 21:36:47.718474 containerd[1452]: time="2025-03-20T21:36:47.718412310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:47.719488 containerd[1452]: time="2025-03-20T21:36:47.719382300Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768" Mar 20 21:36:47.720164 containerd[1452]: time="2025-03-20T21:36:47.720116269Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:47.722562 containerd[1452]: time="2025-03-20T21:36:47.722514249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:47.723714 containerd[1452]: time="2025-03-20T21:36:47.723667762Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.768909632s" Mar 20 21:36:47.723714 containerd[1452]: time="2025-03-20T21:36:47.723709284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 20 21:36:47.724488 containerd[1452]: time="2025-03-20T21:36:47.724335150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 20 21:36:49.397506 containerd[1452]: time="2025-03-20T21:36:49.397313844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:49.398374 containerd[1452]: time="2025-03-20T21:36:49.398128744Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980" Mar 20 21:36:49.399680 containerd[1452]: time="2025-03-20T21:36:49.399592027Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:49.401679 containerd[1452]: time="2025-03-20T21:36:49.401607690Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:49.402745 containerd[1452]: time="2025-03-20T21:36:49.402707982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.678344562s" Mar 20 21:36:49.402745 containerd[1452]: time="2025-03-20T21:36:49.402744667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 20 21:36:49.403216 containerd[1452]: time="2025-03-20T21:36:49.403191058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 20 21:36:50.740686 containerd[1452]: time="2025-03-20T21:36:50.740630457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:50.741173 containerd[1452]: time="2025-03-20T21:36:50.741110312Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831" Mar 20 21:36:50.742432 containerd[1452]: time="2025-03-20T21:36:50.742385634Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:50.745102 containerd[1452]: time="2025-03-20T21:36:50.745064631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:50.745670 containerd[1452]: time="2025-03-20T21:36:50.745641335Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.342417343s" Mar 20 21:36:50.745724 containerd[1452]: time="2025-03-20T21:36:50.745672437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 20 21:36:50.746365 containerd[1452]: time="2025-03-20T21:36:50.746102158Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 20 21:36:51.701481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357107329.mount: Deactivated successfully. 
Mar 20 21:36:51.903364 containerd[1452]: time="2025-03-20T21:36:51.903313300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:51.903957 containerd[1452]: time="2025-03-20T21:36:51.903901120Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 20 21:36:51.904563 containerd[1452]: time="2025-03-20T21:36:51.904535820Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:51.906352 containerd[1452]: time="2025-03-20T21:36:51.906319464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:51.907065 containerd[1452]: time="2025-03-20T21:36:51.907027461Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.160891994s" Mar 20 21:36:51.907117 containerd[1452]: time="2025-03-20T21:36:51.907066984Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 20 21:36:51.907568 containerd[1452]: time="2025-03-20T21:36:51.907540416Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 20 21:36:52.542946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459379725.mount: Deactivated successfully. 
Mar 20 21:36:53.441658 containerd[1452]: time="2025-03-20T21:36:53.441591443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:53.442691 containerd[1452]: time="2025-03-20T21:36:53.442631740Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 20 21:36:53.443417 containerd[1452]: time="2025-03-20T21:36:53.443381974Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:53.446681 containerd[1452]: time="2025-03-20T21:36:53.446450578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:53.447076 containerd[1452]: time="2025-03-20T21:36:53.447038333Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.53946729s" Mar 20 21:36:53.447076 containerd[1452]: time="2025-03-20T21:36:53.447070660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 20 21:36:53.447538 containerd[1452]: time="2025-03-20T21:36:53.447517259Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 20 21:36:53.906307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500454045.mount: Deactivated successfully. 
Mar 20 21:36:53.911386 containerd[1452]: time="2025-03-20T21:36:53.911292071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:36:53.912048 containerd[1452]: time="2025-03-20T21:36:53.911996043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 20 21:36:53.913049 containerd[1452]: time="2025-03-20T21:36:53.913007466Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:36:53.915121 containerd[1452]: time="2025-03-20T21:36:53.915072920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:36:53.915752 containerd[1452]: time="2025-03-20T21:36:53.915687379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 468.142451ms" Mar 20 21:36:53.915752 containerd[1452]: time="2025-03-20T21:36:53.915714566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 20 21:36:53.916130 containerd[1452]: time="2025-03-20T21:36:53.916104462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 20 21:36:54.368748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521593265.mount: Deactivated successfully. 
Mar 20 21:36:57.114139 containerd[1452]: time="2025-03-20T21:36:57.114079179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:57.114605 containerd[1452]: time="2025-03-20T21:36:57.114533790Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Mar 20 21:36:57.115454 containerd[1452]: time="2025-03-20T21:36:57.115424770Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:57.118932 containerd[1452]: time="2025-03-20T21:36:57.118895555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:36:57.119634 containerd[1452]: time="2025-03-20T21:36:57.119528860Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.203384425s" Mar 20 21:36:57.119634 containerd[1452]: time="2025-03-20T21:36:57.119591444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 20 21:36:57.524945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 20 21:36:57.527027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:36:57.633403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:36:57.636516 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:36:57.674325 kubelet[2084]: E0320 21:36:57.674274 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:36:57.676806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:36:57.676951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:36:57.677241 systemd[1]: kubelet.service: Consumed 124ms CPU time, 97.3M memory peak. Mar 20 21:37:02.467146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:37:02.467280 systemd[1]: kubelet.service: Consumed 124ms CPU time, 97.3M memory peak. Mar 20 21:37:02.469201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:37:02.490536 systemd[1]: Reload requested from client PID 2113 ('systemctl') (unit session-7.scope)... Mar 20 21:37:02.490674 systemd[1]: Reloading... Mar 20 21:37:02.559645 zram_generator::config[2160]: No configuration found. Mar 20 21:37:02.675689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:37:02.745767 systemd[1]: Reloading finished in 254 ms. 
Mar 20 21:37:02.798354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:37:02.800696 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:37:02.800980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:37:02.801135 systemd[1]: kubelet.service: Consumed 82ms CPU time, 82.4M memory peak. Mar 20 21:37:02.802550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:37:02.894893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:37:02.898415 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:37:02.935438 kubelet[2204]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:37:02.935438 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:37:02.935438 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:37:02.935438 kubelet[2204]: I0320 21:37:02.935179 2204 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:37:03.312167 kubelet[2204]: I0320 21:37:03.312123 2204 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 21:37:03.312167 kubelet[2204]: I0320 21:37:03.312151 2204 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:37:03.312365 kubelet[2204]: I0320 21:37:03.312333 2204 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 21:37:03.361910 kubelet[2204]: E0320 21:37:03.361874 2204 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:37:03.362732 kubelet[2204]: I0320 21:37:03.362708 2204 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:37:03.370498 kubelet[2204]: I0320 21:37:03.370474 2204 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:37:03.373851 kubelet[2204]: I0320 21:37:03.373826 2204 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:37:03.374149 kubelet[2204]: I0320 21:37:03.374130 2204 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 21:37:03.374271 kubelet[2204]: I0320 21:37:03.374241 2204 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:37:03.374421 kubelet[2204]: I0320 21:37:03.374266 2204 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:37:03.374557 kubelet[2204]: I0320 21:37:03.374545 2204 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:37:03.374557 kubelet[2204]: I0320 21:37:03.374556 2204 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 21:37:03.374767 kubelet[2204]: I0320 21:37:03.374746 2204 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:37:03.376829 kubelet[2204]: I0320 21:37:03.376482 2204 kubelet.go:408] "Attempting to sync node with API server" Mar 20 21:37:03.376829 kubelet[2204]: I0320 21:37:03.376508 2204 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:37:03.376829 kubelet[2204]: I0320 21:37:03.376589 2204 kubelet.go:314] "Adding apiserver pod source" Mar 20 21:37:03.376829 kubelet[2204]: I0320 21:37:03.376600 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:37:03.380695 kubelet[2204]: W0320 21:37:03.380649 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Mar 20 21:37:03.381082 kubelet[2204]: E0320 21:37:03.381019 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:37:03.381082 kubelet[2204]: W0320 21:37:03.380883 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Mar 20 21:37:03.381082 kubelet[2204]: E0320 21:37:03.381057 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:37:03.382210 kubelet[2204]: I0320 21:37:03.382186 2204 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:37:03.384073 kubelet[2204]: I0320 21:37:03.384041 2204 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:37:03.386725 kubelet[2204]: W0320 21:37:03.386706 2204 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 21:37:03.387875 kubelet[2204]: I0320 21:37:03.387856 2204 server.go:1269] "Started kubelet" Mar 20 21:37:03.388922 kubelet[2204]: I0320 21:37:03.388847 2204 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:37:03.389657 kubelet[2204]: I0320 21:37:03.389452 2204 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:37:03.389657 kubelet[2204]: I0320 21:37:03.389216 2204 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:37:03.389749 kubelet[2204]: I0320 21:37:03.389716 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:37:03.389927 kubelet[2204]: I0320 21:37:03.389857 2204 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:37:03.390642 kubelet[2204]: I0320 21:37:03.390602 2204 server.go:460] "Adding debug handlers to kubelet server" Mar 20 21:37:03.391535 kubelet[2204]: I0320 21:37:03.391519 2204 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 21:37:03.392555 kubelet[2204]: E0320 21:37:03.391924 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:37:03.392555 kubelet[2204]: I0320 21:37:03.392202 2204 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 21:37:03.392555 kubelet[2204]: I0320 21:37:03.392251 2204 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:37:03.394952 kubelet[2204]: W0320 21:37:03.394892 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Mar 20 21:37:03.394952 kubelet[2204]: E0320 21:37:03.394949 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:37:03.395042 kubelet[2204]: E0320 21:37:03.394949 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="200ms" Mar 20 21:37:03.395042 kubelet[2204]: I0320 21:37:03.394983 2204 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:37:03.395239 kubelet[2204]: I0320 21:37:03.395209 2204 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:37:03.396832 kubelet[2204]: E0320 21:37:03.395841 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.3:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ea08ab1cc1262 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:37:03.38783293 +0000 UTC m=+0.486270107,LastTimestamp:2025-03-20 21:37:03.38783293 +0000 UTC m=+0.486270107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:37:03.397209 kubelet[2204]: E0320 21:37:03.397175 2204 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:37:03.397485 kubelet[2204]: I0320 21:37:03.397461 2204 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:37:03.406924 kubelet[2204]: I0320 21:37:03.406880 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:37:03.408111 kubelet[2204]: I0320 21:37:03.408072 2204 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:37:03.408111 kubelet[2204]: I0320 21:37:03.408092 2204 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:37:03.408111 kubelet[2204]: I0320 21:37:03.408114 2204 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:37:03.408352 kubelet[2204]: I0320 21:37:03.408322 2204 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 21:37:03.408352 kubelet[2204]: I0320 21:37:03.408346 2204 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:37:03.408399 kubelet[2204]: I0320 21:37:03.408370 2204 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 21:37:03.408423 kubelet[2204]: E0320 21:37:03.408408 2204 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:37:03.409405 kubelet[2204]: W0320 21:37:03.409298 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Mar 20 21:37:03.409405 kubelet[2204]: E0320 21:37:03.409353 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Mar 20 21:37:03.473955 kubelet[2204]: I0320 21:37:03.473911 2204 policy_none.go:49] "None policy: Start" Mar 20 21:37:03.474870 kubelet[2204]: I0320 21:37:03.474847 2204 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:37:03.474902 kubelet[2204]: I0320 21:37:03.474874 2204 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:37:03.482040 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:37:03.492116 kubelet[2204]: E0320 21:37:03.492093 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:37:03.492296 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:37:03.504832 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:37:03.506018 kubelet[2204]: I0320 21:37:03.505929 2204 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:37:03.506174 kubelet[2204]: I0320 21:37:03.506094 2204 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:37:03.506174 kubelet[2204]: I0320 21:37:03.506111 2204 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:37:03.507045 kubelet[2204]: I0320 21:37:03.506545 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:37:03.507633 kubelet[2204]: E0320 21:37:03.507597 2204 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:37:03.522705 systemd[1]: Created slice kubepods-burstable-pod4f094e5ae0a9302b5d01b318229ea2de.slice - libcontainer container kubepods-burstable-pod4f094e5ae0a9302b5d01b318229ea2de.slice. Mar 20 21:37:03.549218 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 20 21:37:03.563796 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
Mar 20 21:37:03.592630 kubelet[2204]: I0320 21:37:03.592553 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:03.592630 kubelet[2204]: I0320 21:37:03.592588 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:03.592630 kubelet[2204]: I0320 21:37:03.592636 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:03.592630 kubelet[2204]: I0320 21:37:03.592679 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:37:03.592822 kubelet[2204]: I0320 21:37:03.592697 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f094e5ae0a9302b5d01b318229ea2de-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f094e5ae0a9302b5d01b318229ea2de\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:03.592822 kubelet[2204]: I0320 21:37:03.592712 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:03.592822 kubelet[2204]: I0320 21:37:03.592726 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:03.592822 kubelet[2204]: I0320 21:37:03.592739 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f094e5ae0a9302b5d01b318229ea2de-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f094e5ae0a9302b5d01b318229ea2de\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:03.592822 kubelet[2204]: I0320 21:37:03.592753 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f094e5ae0a9302b5d01b318229ea2de-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f094e5ae0a9302b5d01b318229ea2de\") " 
pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:03.595740 kubelet[2204]: E0320 21:37:03.595706 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="400ms" Mar 20 21:37:03.607636 kubelet[2204]: I0320 21:37:03.607602 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:37:03.607979 kubelet[2204]: E0320 21:37:03.607958 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Mar 20 21:37:03.809577 kubelet[2204]: I0320 21:37:03.809547 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:37:03.810020 kubelet[2204]: E0320 21:37:03.809989 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Mar 20 21:37:03.847575 kubelet[2204]: E0320 21:37:03.847432 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:03.848394 containerd[1452]: time="2025-03-20T21:37:03.848316252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f094e5ae0a9302b5d01b318229ea2de,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:03.862785 kubelet[2204]: E0320 21:37:03.862619 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:03.862990 containerd[1452]: time="2025-03-20T21:37:03.862951779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:03.865374 containerd[1452]: time="2025-03-20T21:37:03.865343343Z" level=info msg="connecting to shim ca79fa3083dcb399358220e270edea2f6f6feb15c853ba0cba7711522fadd4ea" address="unix:///run/containerd/s/45cd5b5167bcf73fd067877aa788899a27fea0a0a96f1a300f5ec445298422bf" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:03.867314 kubelet[2204]: E0320 21:37:03.867241 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:03.868095 containerd[1452]: time="2025-03-20T21:37:03.868055321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:03.888299 containerd[1452]: time="2025-03-20T21:37:03.888261075Z" level=info msg="connecting to shim b5ab575c8d9ee6c948deb873e6329fd669a4f8f90e790653df20ed89892df003" address="unix:///run/containerd/s/881a8f9cb59455894cb35561c86ef703fd57e47ca41e68aa288dfdfa5b9b5144" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:03.890805 systemd[1]: Started cri-containerd-ca79fa3083dcb399358220e270edea2f6f6feb15c853ba0cba7711522fadd4ea.scope - libcontainer container ca79fa3083dcb399358220e270edea2f6f6feb15c853ba0cba7711522fadd4ea. 
Mar 20 21:37:03.897216 containerd[1452]: time="2025-03-20T21:37:03.897163765Z" level=info msg="connecting to shim 1e0554be4f4205226f9fae59be9bcaf19c2371ae8270cee0aa82e556e6d35e39" address="unix:///run/containerd/s/1be86b6e132f6524f90c92be50c8387fc007299c53d3b0e12501676f43ff4f97" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:03.919875 systemd[1]: Started cri-containerd-b5ab575c8d9ee6c948deb873e6329fd669a4f8f90e790653df20ed89892df003.scope - libcontainer container b5ab575c8d9ee6c948deb873e6329fd669a4f8f90e790653df20ed89892df003. Mar 20 21:37:03.924018 systemd[1]: Started cri-containerd-1e0554be4f4205226f9fae59be9bcaf19c2371ae8270cee0aa82e556e6d35e39.scope - libcontainer container 1e0554be4f4205226f9fae59be9bcaf19c2371ae8270cee0aa82e556e6d35e39. Mar 20 21:37:03.935454 containerd[1452]: time="2025-03-20T21:37:03.935238805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f094e5ae0a9302b5d01b318229ea2de,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca79fa3083dcb399358220e270edea2f6f6feb15c853ba0cba7711522fadd4ea\"" Mar 20 21:37:03.937714 kubelet[2204]: E0320 21:37:03.936464 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:03.938811 containerd[1452]: time="2025-03-20T21:37:03.938744127Z" level=info msg="CreateContainer within sandbox \"ca79fa3083dcb399358220e270edea2f6f6feb15c853ba0cba7711522fadd4ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 21:37:03.949637 containerd[1452]: time="2025-03-20T21:37:03.948665115Z" level=info msg="Container 4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:03.957077 containerd[1452]: time="2025-03-20T21:37:03.957026122Z" level=info msg="CreateContainer within sandbox \"ca79fa3083dcb399358220e270edea2f6f6feb15c853ba0cba7711522fadd4ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9\"" Mar 20 21:37:03.957995 containerd[1452]: time="2025-03-20T21:37:03.957473146Z" level=info msg="StartContainer for \"4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9\"" Mar 20 21:37:03.958542 containerd[1452]: time="2025-03-20T21:37:03.958516991Z" level=info msg="connecting to shim 4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9" address="unix:///run/containerd/s/45cd5b5167bcf73fd067877aa788899a27fea0a0a96f1a300f5ec445298422bf" protocol=ttrpc version=3 Mar 20 21:37:03.961736 containerd[1452]: time="2025-03-20T21:37:03.961700538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5ab575c8d9ee6c948deb873e6329fd669a4f8f90e790653df20ed89892df003\"" Mar 20 21:37:03.962396 kubelet[2204]: E0320 21:37:03.962266 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:03.965723 containerd[1452]: time="2025-03-20T21:37:03.965683277Z" level=info msg="CreateContainer within sandbox \"b5ab575c8d9ee6c948deb873e6329fd669a4f8f90e790653df20ed89892df003\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 21:37:03.969138 containerd[1452]: time="2025-03-20T21:37:03.969104872Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e0554be4f4205226f9fae59be9bcaf19c2371ae8270cee0aa82e556e6d35e39\"" Mar 20 21:37:03.970181 kubelet[2204]: E0320 21:37:03.970158 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:03.972327 containerd[1452]: time="2025-03-20T21:37:03.972281092Z" level=info msg="CreateContainer within sandbox \"1e0554be4f4205226f9fae59be9bcaf19c2371ae8270cee0aa82e556e6d35e39\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 21:37:03.975795 containerd[1452]: time="2025-03-20T21:37:03.975756583Z" level=info msg="Container ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:03.976824 systemd[1]: Started cri-containerd-4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9.scope - libcontainer container 4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9. Mar 20 21:37:03.983018 containerd[1452]: time="2025-03-20T21:37:03.982977445Z" level=info msg="CreateContainer within sandbox \"b5ab575c8d9ee6c948deb873e6329fd669a4f8f90e790653df20ed89892df003\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069\"" Mar 20 21:37:03.983401 containerd[1452]: time="2025-03-20T21:37:03.983362245Z" level=info msg="StartContainer for \"ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069\"" Mar 20 21:37:03.983731 containerd[1452]: time="2025-03-20T21:37:03.983592925Z" level=info msg="Container e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:03.984380 containerd[1452]: time="2025-03-20T21:37:03.984342984Z" level=info msg="connecting to shim ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069" address="unix:///run/containerd/s/881a8f9cb59455894cb35561c86ef703fd57e47ca41e68aa288dfdfa5b9b5144" protocol=ttrpc version=3 Mar 20 21:37:03.989422 containerd[1452]: time="2025-03-20T21:37:03.989391950Z" level=info msg="CreateContainer within sandbox \"1e0554be4f4205226f9fae59be9bcaf19c2371ae8270cee0aa82e556e6d35e39\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70\"" Mar 20 21:37:03.991957 containerd[1452]: time="2025-03-20T21:37:03.991868243Z" level=info msg="StartContainer for \"e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70\"" Mar 20 21:37:03.993071 containerd[1452]: time="2025-03-20T21:37:03.993043584Z" level=info msg="connecting to shim e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70" address="unix:///run/containerd/s/1be86b6e132f6524f90c92be50c8387fc007299c53d3b0e12501676f43ff4f97" protocol=ttrpc version=3 Mar 20 21:37:03.997029 kubelet[2204]: E0320 21:37:03.996977 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="800ms" Mar 20 21:37:04.004759 systemd[1]: Started cri-containerd-ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069.scope - libcontainer container 
ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069. Mar 20 21:37:04.010412 systemd[1]: Started cri-containerd-e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70.scope - libcontainer container e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70. Mar 20 21:37:04.018901 containerd[1452]: time="2025-03-20T21:37:04.018831705Z" level=info msg="StartContainer for \"4e95632fcee350343857b7234e4d2e3110736904ab25d5ab09e21a337b68c1d9\" returns successfully" Mar 20 21:37:04.063446 containerd[1452]: time="2025-03-20T21:37:04.063360753Z" level=info msg="StartContainer for \"ff9fabd5a7bac10e83deb49cb4f2279aeb1672c1fd17dbd71b55fc15b8911069\" returns successfully" Mar 20 21:37:04.063756 containerd[1452]: time="2025-03-20T21:37:04.063679323Z" level=info msg="StartContainer for \"e20a69985a15ab46d7d0ffe208cb08b3b4907b83f165b3e19c33af51aa6f6b70\" returns successfully" Mar 20 21:37:04.212230 kubelet[2204]: I0320 21:37:04.211821 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:37:04.414033 kubelet[2204]: E0320 21:37:04.414000 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:04.422910 kubelet[2204]: E0320 21:37:04.422882 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:04.424892 kubelet[2204]: E0320 21:37:04.424867 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:05.426384 kubelet[2204]: E0320 21:37:05.426346 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:05.616080 kubelet[2204]: E0320 21:37:05.616044 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:05.696062 kubelet[2204]: E0320 21:37:05.695785 2204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 21:37:05.809780 kubelet[2204]: E0320 21:37:05.809681 2204 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182ea08ab1cc1262 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:37:03.38783293 +0000 UTC m=+0.486270107,LastTimestamp:2025-03-20 21:37:03.38783293 +0000 UTC m=+0.486270107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:37:05.878499 kubelet[2204]: I0320 21:37:05.878446 2204 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 20 21:37:05.878499 kubelet[2204]: E0320 21:37:05.878486 2204 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" 
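[Note] The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the host's resolv.conf lists more nameservers than the three that glibc resolvers honor, so the kubelet drops the extras when composing pod resolv.conf files (keeping 1.1.1.1, 1.0.0.1 and 8.8.8.8 here). A small stdlib-only check in the same spirit:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc honors at most three nameserver entries; the kubelet warns and
// truncates beyond that. This sketch just reports what the host file carries.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d entries, keeping %v\n",
			len(servers), servers[:maxNameservers])
	}
}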
Mar 20 21:37:06.378432 kubelet[2204]: I0320 21:37:06.378401 2204 apiserver.go:52] "Watching apiserver" Mar 20 21:37:06.392423 kubelet[2204]: I0320 21:37:06.392398 2204 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 21:37:07.862655 systemd[1]: Reload requested from client PID 2481 ('systemctl') (unit session-7.scope)... Mar 20 21:37:07.862671 systemd[1]: Reloading... Mar 20 21:37:07.929657 zram_generator::config[2528]: No configuration found. Mar 20 21:37:08.086087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:37:08.168523 systemd[1]: Reloading finished in 305 ms. Mar 20 21:37:08.197773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:37:08.201052 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:37:08.201243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:37:08.201283 systemd[1]: kubelet.service: Consumed 881ms CPU time, 118.4M memory peak. Mar 20 21:37:08.203952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:37:08.314418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:37:08.318951 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:37:08.361243 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:37:08.361243 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:37:08.361243 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:37:08.362853 kubelet[2567]: I0320 21:37:08.361596 2567 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:37:08.366667 kubelet[2567]: I0320 21:37:08.366639 2567 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 20 21:37:08.366760 kubelet[2567]: I0320 21:37:08.366750 2567 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:37:08.367021 kubelet[2567]: I0320 21:37:08.367001 2567 server.go:929] "Client rotation is on, will bootstrap in background" Mar 20 21:37:08.368337 kubelet[2567]: I0320 21:37:08.368312 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 21:37:08.370337 kubelet[2567]: I0320 21:37:08.370302 2567 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:37:08.375039 kubelet[2567]: I0320 21:37:08.375017 2567 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 21:37:08.377509 kubelet[2567]: I0320 21:37:08.377481 2567 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:37:08.377620 kubelet[2567]: I0320 21:37:08.377598 2567 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 21:37:08.377733 kubelet[2567]: I0320 21:37:08.377710 2567 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:37:08.377884 kubelet[2567]: I0320 21:37:08.377733 2567 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 21:37:08.377956 kubelet[2567]: I0320 21:37:08.377894 2567 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:37:08.377956 kubelet[2567]: I0320 21:37:08.377903 2567 container_manager_linux.go:300] "Creating device plugin manager" Mar 20 21:37:08.377956 kubelet[2567]: I0320 21:37:08.377930 2567 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:37:08.378023 kubelet[2567]: I0320 21:37:08.378019 2567 kubelet.go:408] "Attempting to sync node with API server" Mar 20 21:37:08.378048 kubelet[2567]: I0320 21:37:08.378030 2567 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:37:08.378048 kubelet[2567]: I0320 21:37:08.378047 2567 kubelet.go:314] "Adding apiserver pod source" Mar 20 21:37:08.378087 kubelet[2567]: I0320 21:37:08.378056 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:37:08.378649 kubelet[2567]: I0320 21:37:08.378543 2567 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:37:08.379025 kubelet[2567]: I0320 21:37:08.378999 2567 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:37:08.381858 kubelet[2567]: I0320 21:37:08.379379 2567 server.go:1269] "Started kubelet" Mar 20 21:37:08.381858 kubelet[2567]: I0320 21:37:08.381027 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 
21:37:08.381858 kubelet[2567]: I0320 21:37:08.381537 2567 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:37:08.381858 kubelet[2567]: I0320 21:37:08.381752 2567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:37:08.385398 kubelet[2567]: I0320 21:37:08.385368 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:37:08.386773 kubelet[2567]: I0320 21:37:08.386734 2567 server.go:460] "Adding debug handlers to kubelet server" Mar 20 21:37:08.392011 kubelet[2567]: I0320 21:37:08.391824 2567 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 21:37:08.392113 kubelet[2567]: E0320 21:37:08.392086 2567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:37:08.392275 kubelet[2567]: I0320 21:37:08.392250 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 21:37:08.393911 kubelet[2567]: E0320 21:37:08.393887 2567 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:37:08.394047 kubelet[2567]: I0320 21:37:08.394028 2567 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:37:08.394125 kubelet[2567]: I0320 21:37:08.394108 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:37:08.394437 kubelet[2567]: I0320 21:37:08.394414 2567 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 21:37:08.394667 kubelet[2567]: I0320 21:37:08.394540 2567 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:37:08.398202 kubelet[2567]: I0320 21:37:08.397495 2567 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:37:08.398997 kubelet[2567]: I0320 21:37:08.398805 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:37:08.402507 kubelet[2567]: I0320 21:37:08.402481 2567 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 21:37:08.402745 kubelet[2567]: I0320 21:37:08.402565 2567 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:37:08.402745 kubelet[2567]: I0320 21:37:08.402587 2567 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 21:37:08.403265 kubelet[2567]: E0320 21:37:08.403241 2567 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:37:08.429544 kubelet[2567]: I0320 21:37:08.429438 2567 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:37:08.429544 kubelet[2567]: I0320 21:37:08.429472 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:37:08.429544 kubelet[2567]: I0320 21:37:08.429489 2567 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:37:08.429678 kubelet[2567]: I0320 21:37:08.429648 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 21:37:08.429678 kubelet[2567]: I0320 21:37:08.429662 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 21:37:08.429678 kubelet[2567]: I0320 21:37:08.429678 2567 policy_none.go:49] "None policy: Start" Mar 20 21:37:08.431294 kubelet[2567]: I0320 21:37:08.431266 2567 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:37:08.431345 kubelet[2567]: I0320 21:37:08.431296 2567 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:37:08.431428 kubelet[2567]: I0320 21:37:08.431414 2567 state_mem.go:75] "Updated machine memory state" Mar 20 21:37:08.435280 kubelet[2567]: I0320 21:37:08.435253 2567 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:37:08.435723 kubelet[2567]: I0320 21:37:08.435397 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 21:37:08.435723 kubelet[2567]: I0320 21:37:08.435413 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:37:08.435723 kubelet[2567]: I0320 21:37:08.435555 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:37:08.538658 kubelet[2567]: I0320 21:37:08.538628 2567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 21:37:08.543268 kubelet[2567]: I0320 21:37:08.543235 2567 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 20 21:37:08.543352 kubelet[2567]: I0320 21:37:08.543307 2567 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 20 21:37:08.596093 kubelet[2567]: I0320 21:37:08.595980 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:08.596093 kubelet[2567]: I0320 21:37:08.596013 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:08.596093 kubelet[2567]: I0320 21:37:08.596033 2567 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:08.596093 kubelet[2567]: I0320 21:37:08.596058 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:37:08.596093 kubelet[2567]: I0320 21:37:08.596076 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f094e5ae0a9302b5d01b318229ea2de-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f094e5ae0a9302b5d01b318229ea2de\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:08.596276 kubelet[2567]: I0320 21:37:08.596091 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f094e5ae0a9302b5d01b318229ea2de-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f094e5ae0a9302b5d01b318229ea2de\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:08.596276 kubelet[2567]: I0320 21:37:08.596106 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:08.596276 kubelet[2567]: I0320 21:37:08.596120 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f094e5ae0a9302b5d01b318229ea2de-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f094e5ae0a9302b5d01b318229ea2de\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:08.596276 kubelet[2567]: I0320 21:37:08.596134 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:37:08.817671 kubelet[2567]: E0320 21:37:08.817564 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:08.817944 kubelet[2567]: E0320 21:37:08.817900 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:08.818687 kubelet[2567]: E0320 21:37:08.818671 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:09.378516 kubelet[2567]: I0320 21:37:09.378489 2567 apiserver.go:52] "Watching apiserver" Mar 20 21:37:09.395050 kubelet[2567]: I0320 
21:37:09.394998 2567 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 21:37:09.417682 kubelet[2567]: E0320 21:37:09.417591 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:09.417682 kubelet[2567]: E0320 21:37:09.417649 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:09.424207 kubelet[2567]: E0320 21:37:09.423167 2567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 21:37:09.424207 kubelet[2567]: E0320 21:37:09.423303 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:09.445267 kubelet[2567]: I0320 21:37:09.445197 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4451810649999999 podStartE2EDuration="1.445181065s" podCreationTimestamp="2025-03-20 21:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:37:09.438865593 +0000 UTC m=+1.117158114" watchObservedRunningTime="2025-03-20 21:37:09.445181065 +0000 UTC m=+1.123473586" Mar 20 21:37:09.453102 kubelet[2567]: I0320 21:37:09.453022 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.453010833 podStartE2EDuration="1.453010833s" podCreationTimestamp="2025-03-20 21:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:37:09.445307847 +0000 UTC m=+1.123600328" watchObservedRunningTime="2025-03-20 21:37:09.453010833 +0000 UTC m=+1.131303394" Mar 20 21:37:09.459892 kubelet[2567]: I0320 21:37:09.459849 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.459839638 podStartE2EDuration="1.459839638s" podCreationTimestamp="2025-03-20 21:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:37:09.452941418 +0000 UTC m=+1.131233979" watchObservedRunningTime="2025-03-20 21:37:09.459839638 +0000 UTC m=+1.138132119" Mar 20 21:37:10.421075 kubelet[2567]: E0320 21:37:10.421027 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:13.020019 sudo[1652]: pam_unix(sudo:session): session closed for user root Mar 20 21:37:13.021653 sshd[1651]: Connection closed by 10.0.0.1 port 38338 Mar 20 21:37:13.022248 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:13.026800 systemd[1]: sshd@6-10.0.0.3:22-10.0.0.1:38338.service: Deactivated successfully. Mar 20 21:37:13.029401 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:37:13.029575 systemd[1]: session-7.scope: Consumed 6.980s CPU time, 224.6M memory peak. 
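[Note] The pod_startup_latency_tracker records above report podStartSLOduration as the watch-observed running time minus the pod creation timestamp; for these static pods the pulling timestamps are zero because no image pull happened, so the SLO figure equals podStartE2EDuration. Reproducing the kube-scheduler number from the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log; parse errors ignored in this sketch.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-03-20 21:37:08 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-03-20 21:37:09.445181065 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 1.445181065s, matching podStartE2EDuration
}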
Mar 20 21:37:13.031098 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:37:13.032214 systemd-logind[1436]: Removed session 7. Mar 20 21:37:13.798216 kubelet[2567]: E0320 21:37:13.798190 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:14.381548 kubelet[2567]: I0320 21:37:14.381511 2567 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 21:37:14.381955 containerd[1452]: time="2025-03-20T21:37:14.381914551Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 21:37:14.382227 kubelet[2567]: I0320 21:37:14.382099 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 21:37:15.149810 systemd[1]: Created slice kubepods-besteffort-pod5b86d1a8_f573_4896_b181_477f35ebce98.slice - libcontainer container kubepods-besteffort-pod5b86d1a8_f573_4896_b181_477f35ebce98.slice. Mar 20 21:37:15.239992 kubelet[2567]: I0320 21:37:15.239867 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b86d1a8-f573-4896-b181-477f35ebce98-kube-proxy\") pod \"kube-proxy-l6fhm\" (UID: \"5b86d1a8-f573-4896-b181-477f35ebce98\") " pod="kube-system/kube-proxy-l6fhm" Mar 20 21:37:15.239992 kubelet[2567]: I0320 21:37:15.239905 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b86d1a8-f573-4896-b181-477f35ebce98-lib-modules\") pod \"kube-proxy-l6fhm\" (UID: \"5b86d1a8-f573-4896-b181-477f35ebce98\") " pod="kube-system/kube-proxy-l6fhm" Mar 20 21:37:15.239992 kubelet[2567]: I0320 21:37:15.239923 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b86d1a8-f573-4896-b181-477f35ebce98-xtables-lock\") pod \"kube-proxy-l6fhm\" (UID: \"5b86d1a8-f573-4896-b181-477f35ebce98\") " pod="kube-system/kube-proxy-l6fhm" Mar 20 21:37:15.239992 kubelet[2567]: I0320 21:37:15.239938 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfbqn\" (UniqueName: \"kubernetes.io/projected/5b86d1a8-f573-4896-b181-477f35ebce98-kube-api-access-nfbqn\") pod \"kube-proxy-l6fhm\" (UID: \"5b86d1a8-f573-4896-b181-477f35ebce98\") " pod="kube-system/kube-proxy-l6fhm" Mar 20 21:37:15.362288 kubelet[2567]: W0320 21:37:15.361878 2567 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Mar 20 21:37:15.362784 kubelet[2567]: W0320 21:37:15.362683 2567 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Mar 20 21:37:15.363620 kubelet[2567]: E0320 21:37:15.363585 2567 reflector.go:158] "Unhandled Error" 
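[Note] Once the controller-manager assigns this node its pod CIDR, the kubelet forwards it to the runtime — the "Updating runtime config through cri with podcidr" record above. A quick stdlib sanity check of the 192.168.0.0/24 range from the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network %v, base %v, %d usable host addresses\n",
		ipnet, ip, (1<<(bits-ones))-2) // 254 pod IPs for a /24
}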
err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 20 21:37:15.363723 kubelet[2567]: E0320 21:37:15.363704 2567 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 20 21:37:15.371330 systemd[1]: Created slice kubepods-besteffort-podcd433c3b_2075_4747_b96e_56a8a812fadc.slice - libcontainer container kubepods-besteffort-podcd433c3b_2075_4747_b96e_56a8a812fadc.slice. Mar 20 21:37:15.441063 kubelet[2567]: I0320 21:37:15.440989 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd433c3b-2075-4747-b96e-56a8a812fadc-var-lib-calico\") pod \"tigera-operator-64ff5465b7-gjrv4\" (UID: \"cd433c3b-2075-4747-b96e-56a8a812fadc\") " pod="tigera-operator/tigera-operator-64ff5465b7-gjrv4" Mar 20 21:37:15.441180 kubelet[2567]: I0320 21:37:15.441163 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpf8t\" (UniqueName: \"kubernetes.io/projected/cd433c3b-2075-4747-b96e-56a8a812fadc-kube-api-access-gpf8t\") pod \"tigera-operator-64ff5465b7-gjrv4\" (UID: \"cd433c3b-2075-4747-b96e-56a8a812fadc\") " pod="tigera-operator/tigera-operator-64ff5465b7-gjrv4" Mar 20 21:37:15.463265 kubelet[2567]: E0320 21:37:15.463201 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:15.463747 containerd[1452]: time="2025-03-20T21:37:15.463699555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l6fhm,Uid:5b86d1a8-f573-4896-b181-477f35ebce98,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:15.481351 containerd[1452]: time="2025-03-20T21:37:15.481306163Z" level=info msg="connecting to shim 0446557e7c496e8eb6e8c8805cbb69481eb7f00eb381128510102d9b6f04d44b" address="unix:///run/containerd/s/6974eb4fbc3ff0319ef4db8f26d54511800b3768811103c84872913eede63cac" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:15.505787 systemd[1]: Started cri-containerd-0446557e7c496e8eb6e8c8805cbb69481eb7f00eb381128510102d9b6f04d44b.scope - libcontainer container 0446557e7c496e8eb6e8c8805cbb69481eb7f00eb381128510102d9b6f04d44b. 
Mar 20 21:37:15.525293 containerd[1452]: time="2025-03-20T21:37:15.525263508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l6fhm,Uid:5b86d1a8-f573-4896-b181-477f35ebce98,Namespace:kube-system,Attempt:0,} returns sandbox id \"0446557e7c496e8eb6e8c8805cbb69481eb7f00eb381128510102d9b6f04d44b\"" Mar 20 21:37:15.526337 kubelet[2567]: E0320 21:37:15.526015 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:15.529764 containerd[1452]: time="2025-03-20T21:37:15.529685782Z" level=info msg="CreateContainer within sandbox \"0446557e7c496e8eb6e8c8805cbb69481eb7f00eb381128510102d9b6f04d44b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 21:37:15.536663 containerd[1452]: time="2025-03-20T21:37:15.536634995Z" level=info msg="Container cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:15.539399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31522585.mount: Deactivated successfully. Mar 20 21:37:15.550730 containerd[1452]: time="2025-03-20T21:37:15.550095128Z" level=info msg="CreateContainer within sandbox \"0446557e7c496e8eb6e8c8805cbb69481eb7f00eb381128510102d9b6f04d44b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2\"" Mar 20 21:37:15.555810 containerd[1452]: time="2025-03-20T21:37:15.555776689Z" level=info msg="StartContainer for \"cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2\"" Mar 20 21:37:15.557131 containerd[1452]: time="2025-03-20T21:37:15.557105937Z" level=info msg="connecting to shim cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2" address="unix:///run/containerd/s/6974eb4fbc3ff0319ef4db8f26d54511800b3768811103c84872913eede63cac" protocol=ttrpc version=3 Mar 20 21:37:15.580745 systemd[1]: Started cri-containerd-cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2.scope - libcontainer container cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2. 
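[Note] kube-proxy, whose container just started, bind-mounts /run/xtables.lock (the "xtables-lock" hostPath volume in the reconciler records earlier) because iptables serializes concurrent writers with an advisory flock(2) on that file. A sketch of taking the same lock (Linux-only):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE|os.O_RDWR, 0600)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// LOCK_EX blocks until any concurrent iptables invocation finishes.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		fmt.Println(err)
		return
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	fmt.Println("holding xtables lock; safe to mutate iptables rules")
}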
Mar 20 21:37:15.615720 containerd[1452]: time="2025-03-20T21:37:15.615624971Z" level=info msg="StartContainer for \"cf4e2d3a59f4638c69369eb6714fb00cfffeacf7d7df5efc079ff00bfe8ddba2\" returns successfully" Mar 20 21:37:16.238782 kubelet[2567]: E0320 21:37:16.238755 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:16.431598 kubelet[2567]: E0320 21:37:16.431205 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:16.431598 kubelet[2567]: E0320 21:37:16.431229 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:16.449747 kubelet[2567]: I0320 21:37:16.449695 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l6fhm" podStartSLOduration=1.449680068 podStartE2EDuration="1.449680068s" podCreationTimestamp="2025-03-20 21:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:37:16.449335959 +0000 UTC m=+8.127628480" watchObservedRunningTime="2025-03-20 21:37:16.449680068 +0000 UTC m=+8.127972549" Mar 20 21:37:16.549930 kubelet[2567]: E0320 21:37:16.549815 2567 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 21:37:16.549930 kubelet[2567]: E0320 21:37:16.549840 2567 projected.go:194] Error preparing data for projected volume kube-api-access-gpf8t for pod tigera-operator/tigera-operator-64ff5465b7-gjrv4: failed to sync configmap cache: timed out waiting for the condition Mar 20 21:37:16.549930 kubelet[2567]: E0320 21:37:16.549893 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd433c3b-2075-4747-b96e-56a8a812fadc-kube-api-access-gpf8t podName:cd433c3b-2075-4747-b96e-56a8a812fadc nodeName:}" failed. No retries permitted until 2025-03-20 21:37:17.049875862 +0000 UTC m=+8.728168383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gpf8t" (UniqueName: "kubernetes.io/projected/cd433c3b-2075-4747-b96e-56a8a812fadc-kube-api-access-gpf8t") pod "tigera-operator-64ff5465b7-gjrv4" (UID: "cd433c3b-2075-4747-b96e-56a8a812fadc") : failed to sync configmap cache: timed out waiting for the condition Mar 20 21:37:17.175015 containerd[1452]: time="2025-03-20T21:37:17.174983187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-gjrv4,Uid:cd433c3b-2075-4747-b96e-56a8a812fadc,Namespace:tigera-operator,Attempt:0,}" Mar 20 21:37:17.194638 containerd[1452]: time="2025-03-20T21:37:17.194117131Z" level=info msg="connecting to shim 8737324984e7b4b127c63f4f16f09b38977f6f5ce913575859cc0326211088fb" address="unix:///run/containerd/s/cbe35f9d3f6fab6266d79467e8b510ffc307cf68e01bc1b08cc2a0febdedbb97" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:17.219817 systemd[1]: Started cri-containerd-8737324984e7b4b127c63f4f16f09b38977f6f5ce913575859cc0326211088fb.scope - libcontainer container 8737324984e7b4b127c63f4f16f09b38977f6f5ce913575859cc0326211088fb. 
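[Note] The nestedpendingoperations record above ("No retries permitted until ... durationBeforeRetry 500ms") is the volume manager's exponential backoff for failed mounts: the wait doubles on each failure up to a cap. A sketch of that schedule under assumed parameters (500ms initial, factor 2, an assumed 2-minute cap; the exact constants live in the kubelet source):

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the first n retry waits for a doubling backoff.
func backoffSchedule(initial time.Duration, factor float64, limit time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		out = append(out, d)
		d = time.Duration(float64(d) * factor)
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	fmt.Println(backoffSchedule(500*time.Millisecond, 2.0, 2*time.Minute, 8))
	// [500ms 1s 2s 4s 8s 16s 32s 1m4s]
}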
Mar 20 21:37:17.246073 containerd[1452]: time="2025-03-20T21:37:17.246032832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-gjrv4,Uid:cd433c3b-2075-4747-b96e-56a8a812fadc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8737324984e7b4b127c63f4f16f09b38977f6f5ce913575859cc0326211088fb\"" Mar 20 21:37:17.249079 containerd[1452]: time="2025-03-20T21:37:17.249045637Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 20 21:37:18.479164 kubelet[2567]: E0320 21:37:18.479090 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:18.654471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997922035.mount: Deactivated successfully. Mar 20 21:37:18.911577 containerd[1452]: time="2025-03-20T21:37:18.911525099Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:18.912653 containerd[1452]: time="2025-03-20T21:37:18.912503262Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 20 21:37:18.913446 containerd[1452]: time="2025-03-20T21:37:18.913398463Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:18.915227 containerd[1452]: time="2025-03-20T21:37:18.915196631Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:18.915897 containerd[1452]: time="2025-03-20T21:37:18.915862639Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 1.666687295s" Mar 20 21:37:18.915929 containerd[1452]: time="2025-03-20T21:37:18.915899298Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 20 21:37:18.922361 containerd[1452]: time="2025-03-20T21:37:18.922331672Z" level=info msg="CreateContainer within sandbox \"8737324984e7b4b127c63f4f16f09b38977f6f5ce913575859cc0326211088fb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 20 21:37:18.956141 containerd[1452]: time="2025-03-20T21:37:18.956086609Z" level=info msg="Container dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:18.959486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482055417.mount: Deactivated successfully. 
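[Note] The pull above resolved quay.io/tigera/operator:v1.36.5 to a repo digest and fetched about 19 MB in roughly 1.67s. An equivalent pull can be issued straight to containerd; pinning the digest from the log fetches exactly the bytes the kubelet resolved. A sketch using the classic github.com/containerd/containerd Go client module, assuming the default socket:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}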
Mar 20 21:37:18.962137 containerd[1452]: time="2025-03-20T21:37:18.962105420Z" level=info msg="CreateContainer within sandbox \"8737324984e7b4b127c63f4f16f09b38977f6f5ce913575859cc0326211088fb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b\"" Mar 20 21:37:18.962579 containerd[1452]: time="2025-03-20T21:37:18.962545837Z" level=info msg="StartContainer for \"dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b\"" Mar 20 21:37:18.963896 containerd[1452]: time="2025-03-20T21:37:18.963871531Z" level=info msg="connecting to shim dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b" address="unix:///run/containerd/s/cbe35f9d3f6fab6266d79467e8b510ffc307cf68e01bc1b08cc2a0febdedbb97" protocol=ttrpc version=3 Mar 20 21:37:19.002787 systemd[1]: Started cri-containerd-dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b.scope - libcontainer container dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b. Mar 20 21:37:19.074543 containerd[1452]: time="2025-03-20T21:37:19.074149064Z" level=info msg="StartContainer for \"dcfcf490f09912b8e75704f3e64a10a780195198b42d9b14d96332f69f8a307b\" returns successfully" Mar 20 21:37:19.443283 kubelet[2567]: E0320 21:37:19.443235 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:19.457993 kubelet[2567]: I0320 21:37:19.457929 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-gjrv4" podStartSLOduration=2.785263748 podStartE2EDuration="4.457913614s" podCreationTimestamp="2025-03-20 21:37:15 +0000 UTC" firstStartedPulling="2025-03-20 21:37:17.248057764 +0000 UTC m=+8.926350285" lastFinishedPulling="2025-03-20 21:37:18.92070763 +0000 UTC m=+10.599000151" observedRunningTime="2025-03-20 21:37:19.456210896 +0000 UTC m=+11.134503417" watchObservedRunningTime="2025-03-20 21:37:19.457913614 +0000 UTC m=+11.136206135" Mar 20 21:37:20.145307 update_engine[1440]: I20250320 21:37:20.144773 1440 update_attempter.cc:509] Updating boot flags... Mar 20 21:37:20.168639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2962) Mar 20 21:37:20.213650 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2961) Mar 20 21:37:20.247679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2961) Mar 20 21:37:23.288776 systemd[1]: Created slice kubepods-besteffort-pod872e6251_6e97_43bb_8f50_79e0f03579b8.slice - libcontainer container kubepods-besteffort-pod872e6251_6e97_43bb_8f50_79e0f03579b8.slice. 
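[Note] For tigera-operator the tracker now reports a nonzero pull window, and podStartSLOduration (2.785s) is podStartE2EDuration (4.458s) minus the time spent pulling the image. Reproducing the arithmetic from the logged timestamps (reformatted to RFC 3339; parse errors ignored in this sketch):

package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 4457913614 * time.Nanosecond // podStartE2EDuration: 4.457913614s
	pullStart, _ := time.Parse(time.RFC3339Nano, "2025-03-20T21:37:17.248057764Z")
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2025-03-20T21:37:18.92070763Z")
	pull := pullEnd.Sub(pullStart)
	fmt.Println("pull took:", pull)          // 1.672649866s
	fmt.Println("SLO duration:", e2e-pull)   // 2.785263748s, matching the log
}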
Mar 20 21:37:23.289298 kubelet[2567]: I0320 21:37:23.289207 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872e6251-6e97-43bb-8f50-79e0f03579b8-tigera-ca-bundle\") pod \"calico-typha-bd9847f8d-t8x2q\" (UID: \"872e6251-6e97-43bb-8f50-79e0f03579b8\") " pod="calico-system/calico-typha-bd9847f8d-t8x2q" Mar 20 21:37:23.289298 kubelet[2567]: I0320 21:37:23.289252 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/872e6251-6e97-43bb-8f50-79e0f03579b8-typha-certs\") pod \"calico-typha-bd9847f8d-t8x2q\" (UID: \"872e6251-6e97-43bb-8f50-79e0f03579b8\") " pod="calico-system/calico-typha-bd9847f8d-t8x2q" Mar 20 21:37:23.289298 kubelet[2567]: I0320 21:37:23.289273 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pbw\" (UniqueName: \"kubernetes.io/projected/872e6251-6e97-43bb-8f50-79e0f03579b8-kube-api-access-56pbw\") pod \"calico-typha-bd9847f8d-t8x2q\" (UID: \"872e6251-6e97-43bb-8f50-79e0f03579b8\") " pod="calico-system/calico-typha-bd9847f8d-t8x2q" Mar 20 21:37:23.333744 systemd[1]: Created slice kubepods-besteffort-podf4c6ed22_520f_437f_9056_61327fcbf4c9.slice - libcontainer container kubepods-besteffort-podf4c6ed22_520f_437f_9056_61327fcbf4c9.slice. Mar 20 21:37:23.390271 kubelet[2567]: I0320 21:37:23.390213 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-xtables-lock\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390271 kubelet[2567]: I0320 21:37:23.390256 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-flexvol-driver-host\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390458 kubelet[2567]: I0320 21:37:23.390287 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-lib-modules\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390458 kubelet[2567]: I0320 21:37:23.390303 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-bin-dir\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390458 kubelet[2567]: I0320 21:37:23.390317 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-log-dir\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390458 kubelet[2567]: I0320 21:37:23.390332 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhd9\" 
(UniqueName: \"kubernetes.io/projected/f4c6ed22-520f-437f-9056-61327fcbf4c9-kube-api-access-dkhd9\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390458 kubelet[2567]: I0320 21:37:23.390346 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-run-calico\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390570 kubelet[2567]: I0320 21:37:23.390363 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-net-dir\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390570 kubelet[2567]: I0320 21:37:23.390379 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4c6ed22-520f-437f-9056-61327fcbf4c9-tigera-ca-bundle\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390570 kubelet[2567]: I0320 21:37:23.390393 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4c6ed22-520f-437f-9056-61327fcbf4c9-node-certs\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390570 kubelet[2567]: I0320 21:37:23.390407 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-lib-calico\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.390570 kubelet[2567]: I0320 21:37:23.390433 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-policysync\") pod \"calico-node-vvxj8\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " pod="calico-system/calico-node-vvxj8" Mar 20 21:37:23.439383 kubelet[2567]: E0320 21:37:23.439209 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lkcl" podUID="e8083da4-6460-47ba-b48a-9b8f613b80aa" Mar 20 21:37:23.491374 kubelet[2567]: I0320 21:37:23.491333 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8083da4-6460-47ba-b48a-9b8f613b80aa-kubelet-dir\") pod \"csi-node-driver-7lkcl\" (UID: \"e8083da4-6460-47ba-b48a-9b8f613b80aa\") " pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:23.491875 kubelet[2567]: I0320 21:37:23.491584 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b87z\" (UniqueName: 
\"kubernetes.io/projected/e8083da4-6460-47ba-b48a-9b8f613b80aa-kube-api-access-5b87z\") pod \"csi-node-driver-7lkcl\" (UID: \"e8083da4-6460-47ba-b48a-9b8f613b80aa\") " pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:23.491875 kubelet[2567]: I0320 21:37:23.491673 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8083da4-6460-47ba-b48a-9b8f613b80aa-registration-dir\") pod \"csi-node-driver-7lkcl\" (UID: \"e8083da4-6460-47ba-b48a-9b8f613b80aa\") " pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:23.491875 kubelet[2567]: I0320 21:37:23.491746 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8083da4-6460-47ba-b48a-9b8f613b80aa-socket-dir\") pod \"csi-node-driver-7lkcl\" (UID: \"e8083da4-6460-47ba-b48a-9b8f613b80aa\") " pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:23.491875 kubelet[2567]: I0320 21:37:23.491772 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e8083da4-6460-47ba-b48a-9b8f613b80aa-varrun\") pod \"csi-node-driver-7lkcl\" (UID: \"e8083da4-6460-47ba-b48a-9b8f613b80aa\") " pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:23.493020 kubelet[2567]: E0320 21:37:23.492996 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.493020 kubelet[2567]: W0320 21:37:23.493016 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.493236 kubelet[2567]: E0320 21:37:23.493214 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.494805 kubelet[2567]: E0320 21:37:23.494776 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.494805 kubelet[2567]: W0320 21:37:23.494797 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.494891 kubelet[2567]: E0320 21:37:23.494819 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.497595 kubelet[2567]: E0320 21:37:23.497575 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.497595 kubelet[2567]: W0320 21:37:23.497592 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.497748 kubelet[2567]: E0320 21:37:23.497720 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:23.497838 kubelet[2567]: E0320 21:37:23.497822 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.497838 kubelet[2567]: W0320 21:37:23.497834 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.497891 kubelet[2567]: E0320 21:37:23.497877 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.498332 kubelet[2567]: E0320 21:37:23.498321 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.498332 kubelet[2567]: W0320 21:37:23.498331 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.498401 kubelet[2567]: E0320 21:37:23.498374 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.498476 kubelet[2567]: E0320 21:37:23.498465 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.498510 kubelet[2567]: W0320 21:37:23.498476 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.498533 kubelet[2567]: E0320 21:37:23.498520 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.498636 kubelet[2567]: E0320 21:37:23.498625 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.498636 kubelet[2567]: W0320 21:37:23.498635 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.498701 kubelet[2567]: E0320 21:37:23.498679 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.498779 kubelet[2567]: E0320 21:37:23.498769 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.498813 kubelet[2567]: W0320 21:37:23.498779 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.498813 kubelet[2567]: E0320 21:37:23.498793 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:23.500208 kubelet[2567]: E0320 21:37:23.500112 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.500208 kubelet[2567]: W0320 21:37:23.500128 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.500208 kubelet[2567]: E0320 21:37:23.500144 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.500668 kubelet[2567]: E0320 21:37:23.500542 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.500668 kubelet[2567]: W0320 21:37:23.500565 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.500668 kubelet[2567]: E0320 21:37:23.500588 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.501097 kubelet[2567]: E0320 21:37:23.500955 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.501097 kubelet[2567]: W0320 21:37:23.500969 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.501097 kubelet[2567]: E0320 21:37:23.500982 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.501261 kubelet[2567]: E0320 21:37:23.501249 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.501318 kubelet[2567]: W0320 21:37:23.501307 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.501375 kubelet[2567]: E0320 21:37:23.501366 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.501691 kubelet[2567]: E0320 21:37:23.501591 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.501691 kubelet[2567]: W0320 21:37:23.501602 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.501691 kubelet[2567]: E0320 21:37:23.501620 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:23.501866 kubelet[2567]: E0320 21:37:23.501854 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.501920 kubelet[2567]: W0320 21:37:23.501909 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.501971 kubelet[2567]: E0320 21:37:23.501961 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.502776 kubelet[2567]: E0320 21:37:23.502749 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.502776 kubelet[2567]: W0320 21:37:23.502766 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.502865 kubelet[2567]: E0320 21:37:23.502783 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.502975 kubelet[2567]: E0320 21:37:23.502956 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.502975 kubelet[2567]: W0320 21:37:23.502967 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.502975 kubelet[2567]: E0320 21:37:23.502977 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.592926 kubelet[2567]: E0320 21:37:23.592814 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.592926 kubelet[2567]: W0320 21:37:23.592830 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.592926 kubelet[2567]: E0320 21:37:23.592844 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.593668 kubelet[2567]: E0320 21:37:23.593021 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.593668 kubelet[2567]: W0320 21:37:23.593029 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.593668 kubelet[2567]: E0320 21:37:23.593038 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:23.593668 kubelet[2567]: E0320 21:37:23.593289 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.593668 kubelet[2567]: W0320 21:37:23.593299 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.593668 kubelet[2567]: E0320 21:37:23.593308 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.593668 kubelet[2567]: E0320 21:37:23.593486 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:23.594111 containerd[1452]: time="2025-03-20T21:37:23.593924085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd9847f8d-t8x2q,Uid:872e6251-6e97-43bb-8f50-79e0f03579b8,Namespace:calico-system,Attempt:0,}" Mar 20 21:37:23.594362 kubelet[2567]: E0320 21:37:23.593984 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.594362 kubelet[2567]: W0320 21:37:23.593996 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.594362 kubelet[2567]: E0320 21:37:23.594008 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.594847 kubelet[2567]: E0320 21:37:23.594551 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.594847 kubelet[2567]: W0320 21:37:23.594566 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.594847 kubelet[2567]: E0320 21:37:23.594596 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.595238 kubelet[2567]: E0320 21:37:23.595223 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.595423 kubelet[2567]: W0320 21:37:23.595291 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.595423 kubelet[2567]: E0320 21:37:23.595321 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
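The repeated driver-call.go/plugins.go triplets above come from the kubelet's FlexVolume prober: it executes each binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec with the argument "init" and expects a JSON status object on stdout. Here the nodeagent~uds directory exists but its uds executable does not, so each call yields empty output, and unmarshalling "" is exactly "unexpected end of JSON input". A minimal Go sketch of that handshake follows; the DriverStatus shape matches the published FlexVolume spec, while the helper name probeFlexDriver is ours, not the kubelet's.

// Sketch of the kubelet-side FlexVolume "init" probe that keeps failing
// above. Field names follow the FlexVolume spec; everything else is
// illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeFlexDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty stdout produces "unexpected end of JSON input", as logged.
		return nil, fmt.Errorf("failed to unmarshal output: %w", err)
	}
	return &st, nil
}

func main() {
	_, err := probeFlexDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // mirrors the driver-call.go failures in the log
}

A conforming driver would stop the spam by printing a JSON object such as {"status":"Success","capabilities":{"attach":false}} in response to init.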
Mar 20 21:37:23.635850 containerd[1452]: time="2025-03-20T21:37:23.635811892Z" level=info msg="connecting to shim f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18" address="unix:///run/containerd/s/7887fef0e84e7f32f545dc2389fb8ad13ea4a25f37f2481c465fa319403e9373" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:23.637357 kubelet[2567]: E0320 21:37:23.637320 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:23.638824 containerd[1452]: time="2025-03-20T21:37:23.638799802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vvxj8,Uid:f4c6ed22-520f-437f-9056-61327fcbf4c9,Namespace:calico-system,Attempt:0,}" Mar 20 21:37:23.657848 containerd[1452]: time="2025-03-20T21:37:23.657677110Z" level=info msg="connecting to shim e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1" address="unix:///run/containerd/s/e3b924c76a37434b949a37c6b37835d077f3a9bbfe4cc6c32b802a224da0ccac" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:23.679760 systemd[1]: Started cri-containerd-f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18.scope - libcontainer container f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18. Mar 20 21:37:23.692755 systemd[1]: Started cri-containerd-e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1.scope - libcontainer container e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1. Mar 20 21:37:23.721283 containerd[1452]: time="2025-03-20T21:37:23.721163073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vvxj8,Uid:f4c6ed22-520f-437f-9056-61327fcbf4c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\"" Mar 20 21:37:23.723077 containerd[1452]: time="2025-03-20T21:37:23.723047639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd9847f8d-t8x2q,Uid:872e6251-6e97-43bb-8f50-79e0f03579b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\"" Mar 20 21:37:23.726690 kubelet[2567]: E0320 21:37:23.723885 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:23.727274 kubelet[2567]: E0320 21:37:23.727041 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:23.729057 containerd[1452]: time="2025-03-20T21:37:23.728973600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 20 21:37:23.806428 kubelet[2567]: E0320 21:37:23.806404 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:23.890118 kubelet[2567]: E0320 21:37:23.890030 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.890118 kubelet[2567]: W0320 21:37:23.890051 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.890118 kubelet[2567]: E0320 21:37:23.890068 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
output: "" Mar 20 21:37:23.890118 kubelet[2567]: E0320 21:37:23.890068 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.890408 kubelet[2567]: E0320 21:37:23.890393 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.890447 kubelet[2567]: W0320 21:37:23.890407 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.890447 kubelet[2567]: E0320 21:37:23.890418 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.892898 kubelet[2567]: E0320 21:37:23.892880 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.892898 kubelet[2567]: W0320 21:37:23.892897 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.892968 kubelet[2567]: E0320 21:37:23.892909 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.893081 kubelet[2567]: E0320 21:37:23.893066 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.893081 kubelet[2567]: W0320 21:37:23.893077 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.893133 kubelet[2567]: E0320 21:37:23.893086 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.893244 kubelet[2567]: E0320 21:37:23.893230 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.893244 kubelet[2567]: W0320 21:37:23.893241 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.893296 kubelet[2567]: E0320 21:37:23.893249 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:23.893388 kubelet[2567]: E0320 21:37:23.893376 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.893416 kubelet[2567]: W0320 21:37:23.893390 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.893416 kubelet[2567]: E0320 21:37:23.893398 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.893533 kubelet[2567]: E0320 21:37:23.893522 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.893557 kubelet[2567]: W0320 21:37:23.893532 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.893557 kubelet[2567]: E0320 21:37:23.893540 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.893922 kubelet[2567]: E0320 21:37:23.893780 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.893922 kubelet[2567]: W0320 21:37:23.893796 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.893922 kubelet[2567]: E0320 21:37:23.893820 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.894016 kubelet[2567]: E0320 21:37:23.893969 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.894016 kubelet[2567]: W0320 21:37:23.893991 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.894016 kubelet[2567]: E0320 21:37:23.894001 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894225 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.895935 kubelet[2567]: W0320 21:37:23.894237 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894246 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894386 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.895935 kubelet[2567]: W0320 21:37:23.894406 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894416 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894572 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.895935 kubelet[2567]: W0320 21:37:23.894579 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894588 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.895935 kubelet[2567]: E0320 21:37:23.894823 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.896180 kubelet[2567]: W0320 21:37:23.894832 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.896180 kubelet[2567]: E0320 21:37:23.894840 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.896180 kubelet[2567]: E0320 21:37:23.894979 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.896180 kubelet[2567]: W0320 21:37:23.895000 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.896180 kubelet[2567]: E0320 21:37:23.895010 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 20 21:37:23.896180 kubelet[2567]: E0320 21:37:23.895170 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 20 21:37:23.896180 kubelet[2567]: W0320 21:37:23.895178 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 20 21:37:23.896180 kubelet[2567]: E0320 21:37:23.895185 2567 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 20 21:37:24.794400 containerd[1452]: time="2025-03-20T21:37:24.794325842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:24.796654 containerd[1452]: time="2025-03-20T21:37:24.794895531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 20 21:37:24.797968 containerd[1452]: time="2025-03-20T21:37:24.797926524Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:24.800977 containerd[1452]: time="2025-03-20T21:37:24.800940551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:24.802427 containerd[1452]: time="2025-03-20T21:37:24.802387002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.073380469s" Mar 20 21:37:24.802427 containerd[1452]: time="2025-03-20T21:37:24.802421374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 20 21:37:24.807739 containerd[1452]: time="2025-03-20T21:37:24.807707836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 20 21:37:24.810260 containerd[1452]: time="2025-03-20T21:37:24.809945497Z" level=info msg="CreateContainer within sandbox \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 20 21:37:24.816397 containerd[1452]: time="2025-03-20T21:37:24.816372017Z" level=info msg="Container 39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:24.828111 containerd[1452]: time="2025-03-20T21:37:24.827939185Z" level=info msg="CreateContainer within sandbox \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\"" Mar 20 21:37:24.828660 containerd[1452]: time="2025-03-20T21:37:24.828483625Z" level=info msg="StartContainer for \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\"" Mar 20 21:37:24.829784 containerd[1452]: time="2025-03-20T21:37:24.829758573Z" level=info msg="connecting to shim 39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829" address="unix:///run/containerd/s/e3b924c76a37434b949a37c6b37835d077f3a9bbfe4cc6c32b802a224da0ccac" protocol=ttrpc version=3 Mar 20 21:37:24.847752 systemd[1]: Started cri-containerd-39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829.scope - libcontainer container 39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829. 
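The containerd messages above trace the standard CRI lifecycle the kubelet drives for calico-node: RunPodSandbox, PullImage, CreateContainer inside the returned sandbox id, then StartContainer. Below is a client-side Go sketch of the same call sequence, assuming k8s.io/cri-api (runtime v1) and containerd's default CRI socket path; it illustrates the protocol, not the kubelet's actual implementation, and the metadata values are copied from the log entries above.

// Sketch of the CRI sequence: RunPodSandbox -> PullImage -> CreateContainer
// -> StartContainer, against containerd's CRI endpoint.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	must(err)
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata mirrors the calico-node-vvxj8 entries in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-node-vvxj8",
			Uid:       "f4c6ed22-520f-437f-9056-61327fcbf4c9",
			Namespace: "calico-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	must(err)

	image := &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2"}
	_, err = img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: image})
	must(err)

	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "flexvol-driver", Attempt: 0},
			Image:    image,
		},
		SandboxConfig: sandboxCfg,
	})
	must(err)
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
	must(err)
}

The "connecting to shim ... protocol=ttrpc" lines are containerd's internal follow-through: for each sandbox or container it starts a shim process and talks to it over a per-task unix socket using ttrpc.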
Mar 20 21:37:24.898772 containerd[1452]: time="2025-03-20T21:37:24.898535228Z" level=info msg="StartContainer for \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" returns successfully" Mar 20 21:37:24.919731 systemd[1]: cri-containerd-39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829.scope: Deactivated successfully. Mar 20 21:37:24.920616 systemd[1]: cri-containerd-39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829.scope: Consumed 41ms CPU time, 8M memory peak, 6.2M written to disk. Mar 20 21:37:24.946084 containerd[1452]: time="2025-03-20T21:37:24.945895339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" id:\"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" pid:3153 exited_at:{seconds:1742506644 nanos:937106832}" Mar 20 21:37:24.959832 containerd[1452]: time="2025-03-20T21:37:24.959777597Z" level=info msg="received exit event container_id:\"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" id:\"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" pid:3153 exited_at:{seconds:1742506644 nanos:937106832}" Mar 20 21:37:24.991797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829-rootfs.mount: Deactivated successfully. Mar 20 21:37:25.403222 kubelet[2567]: E0320 21:37:25.403167 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lkcl" podUID="e8083da4-6460-47ba-b48a-9b8f613b80aa" Mar 20 21:37:25.464784 kubelet[2567]: E0320 21:37:25.464752 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:26.243322 containerd[1452]: time="2025-03-20T21:37:26.243276103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:26.244363 containerd[1452]: time="2025-03-20T21:37:26.243806161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 20 21:37:26.244634 containerd[1452]: time="2025-03-20T21:37:26.244588583Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:26.246631 containerd[1452]: time="2025-03-20T21:37:26.246443924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:26.247200 containerd[1452]: time="2025-03-20T21:37:26.247175009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.439435082s" Mar 20 21:37:26.247245 containerd[1452]: time="2025-03-20T21:37:26.247206659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" 
returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 20 21:37:26.248315 containerd[1452]: time="2025-03-20T21:37:26.248180225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 20 21:37:26.266453 containerd[1452]: time="2025-03-20T21:37:26.266381000Z" level=info msg="CreateContainer within sandbox \"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 20 21:37:26.274285 containerd[1452]: time="2025-03-20T21:37:26.274241592Z" level=info msg="Container a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:26.286320 containerd[1452]: time="2025-03-20T21:37:26.286276062Z" level=info msg="CreateContainer within sandbox \"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\"" Mar 20 21:37:26.287133 containerd[1452]: time="2025-03-20T21:37:26.287096257Z" level=info msg="StartContainer for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\"" Mar 20 21:37:26.288496 containerd[1452]: time="2025-03-20T21:37:26.288461274Z" level=info msg="connecting to shim a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197" address="unix:///run/containerd/s/7887fef0e84e7f32f545dc2389fb8ad13ea4a25f37f2481c465fa319403e9373" protocol=ttrpc version=3 Mar 20 21:37:26.310792 systemd[1]: Started cri-containerd-a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197.scope - libcontainer container a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197. Mar 20 21:37:26.400065 containerd[1452]: time="2025-03-20T21:37:26.400022552Z" level=info msg="StartContainer for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" returns successfully" Mar 20 21:37:26.468873 kubelet[2567]: E0320 21:37:26.468289 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:26.489214 kubelet[2567]: I0320 21:37:26.489119 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bd9847f8d-t8x2q" podStartSLOduration=0.969858376 podStartE2EDuration="3.489101342s" podCreationTimestamp="2025-03-20 21:37:23 +0000 UTC" firstStartedPulling="2025-03-20 21:37:23.72879117 +0000 UTC m=+15.407083691" lastFinishedPulling="2025-03-20 21:37:26.248034136 +0000 UTC m=+17.926326657" observedRunningTime="2025-03-20 21:37:26.488354491 +0000 UTC m=+18.166646972" watchObservedRunningTime="2025-03-20 21:37:26.489101342 +0000 UTC m=+18.167393863" Mar 20 21:37:27.403193 kubelet[2567]: E0320 21:37:27.403147 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lkcl" podUID="e8083da4-6460-47ba-b48a-9b8f613b80aa" Mar 20 21:37:27.471018 kubelet[2567]: I0320 21:37:27.470986 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:37:27.471463 kubelet[2567]: E0320 21:37:27.471445 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:29.403032 kubelet[2567]: E0320 21:37:29.402984 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7lkcl" podUID="e8083da4-6460-47ba-b48a-9b8f613b80aa" Mar 20 21:37:29.555091 containerd[1452]: time="2025-03-20T21:37:29.555038309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:29.556130 containerd[1452]: time="2025-03-20T21:37:29.556071012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 20 21:37:29.556795 containerd[1452]: time="2025-03-20T21:37:29.556760534Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:29.559068 containerd[1452]: time="2025-03-20T21:37:29.559030640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:29.559756 containerd[1452]: time="2025-03-20T21:37:29.559732246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 3.311523172s" Mar 20 21:37:29.559950 containerd[1452]: time="2025-03-20T21:37:29.559837437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 20 21:37:29.561714 containerd[1452]: time="2025-03-20T21:37:29.561684059Z" level=info msg="CreateContainer within sandbox \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 20 21:37:29.569001 containerd[1452]: time="2025-03-20T21:37:29.568959035Z" level=info msg="Container 63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:29.576626 containerd[1452]: time="2025-03-20T21:37:29.576585954Z" level=info msg="CreateContainer within sandbox \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\"" Mar 20 21:37:29.577015 containerd[1452]: time="2025-03-20T21:37:29.576986031Z" level=info msg="StartContainer for \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\"" Mar 20 21:37:29.578310 containerd[1452]: time="2025-03-20T21:37:29.578285853Z" level=info msg="connecting to shim 63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91" address="unix:///run/containerd/s/e3b924c76a37434b949a37c6b37835d077f3a9bbfe4cc6c32b802a224da0ccac" protocol=ttrpc version=3 Mar 20 21:37:29.593730 systemd[1]: Started cri-containerd-63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91.scope - libcontainer container 
63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91. Mar 20 21:37:29.667377 containerd[1452]: time="2025-03-20T21:37:29.667263131Z" level=info msg="StartContainer for \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" returns successfully" Mar 20 21:37:30.183495 containerd[1452]: time="2025-03-20T21:37:30.183452692Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:37:30.185670 systemd[1]: cri-containerd-63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91.scope: Deactivated successfully. Mar 20 21:37:30.185955 systemd[1]: cri-containerd-63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91.scope: Consumed 424ms CPU time, 155.8M memory peak, 4K read from disk, 150.3M written to disk. Mar 20 21:37:30.186978 containerd[1452]: time="2025-03-20T21:37:30.186816439Z" level=info msg="received exit event container_id:\"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" id:\"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" pid:3255 exited_at:{seconds:1742506650 nanos:186413805}" Mar 20 21:37:30.186978 containerd[1452]: time="2025-03-20T21:37:30.186828682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" id:\"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" pid:3255 exited_at:{seconds:1742506650 nanos:186413805}" Mar 20 21:37:30.210645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91-rootfs.mount: Deactivated successfully. Mar 20 21:37:30.282476 kubelet[2567]: I0320 21:37:30.282429 2567 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 20 21:37:30.313395 systemd[1]: Created slice kubepods-besteffort-pod48cfd0f2_c351_498a_b143_cbea7fdfcbf4.slice - libcontainer container kubepods-besteffort-pod48cfd0f2_c351_498a_b143_cbea7fdfcbf4.slice. Mar 20 21:37:30.319005 systemd[1]: Created slice kubepods-burstable-pod66e145b2_4210_4e17_a900_099df1f2a945.slice - libcontainer container kubepods-burstable-pod66e145b2_4210_4e17_a900_099df1f2a945.slice. Mar 20 21:37:30.324983 systemd[1]: Created slice kubepods-burstable-pod7d3d34a8_f5e5_459f_b088_f0d540cbdefd.slice - libcontainer container kubepods-burstable-pod7d3d34a8_f5e5_459f_b088_f0d540cbdefd.slice. Mar 20 21:37:30.330368 systemd[1]: Created slice kubepods-besteffort-pod0086dfc3_437b_4e05_be4a_aa74eab0bd69.slice - libcontainer container kubepods-besteffort-pod0086dfc3_437b_4e05_be4a_aa74eab0bd69.slice. Mar 20 21:37:30.337510 systemd[1]: Created slice kubepods-besteffort-pod19b6e272_78dd_4728_a0b7_c69cdcf39993.slice - libcontainer container kubepods-besteffort-pod19b6e272_78dd_4728_a0b7_c69cdcf39993.slice. 
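The pod_startup_latency_tracker entry above for calico-typha reports two durations that are easy to conflate: podStartE2EDuration (3.489101342s) spans from podCreationTimestamp to watchObservedRunningTime, while podStartSLOduration (0.969858376s) additionally excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A minimal sketch reproducing that arithmetic from the logged timestamps — the relationship is inferred from the numbers here, not taken from kubelet's source:

```python
# Reproduce kubelet's startup figures for calico-typha from the log above.
# E2E = watchObservedRunningTime - podCreationTimestamp; the SLO duration
# subtracts the image-pull window, which is why 3.489s shrinks to 0.970s.
from decimal import Decimal

def secs(hms: str) -> Decimal:
    """Seconds since midnight for a '21:37:26.489101342'-style timestamp."""
    h, m, s = hms.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

created    = secs("21:37:23")            # podCreationTimestamp
first_pull = secs("21:37:23.72879117")   # firstStartedPulling
last_pull  = secs("21:37:26.248034136")  # lastFinishedPulling
observed   = secs("21:37:26.489101342")  # watchObservedRunningTime

e2e = observed - created
slo = e2e - (last_pull - first_pull)
print(f"podStartE2EDuration={e2e}s podStartSLOduration={slo}s")
# -> podStartE2EDuration=3.489101342s podStartSLOduration=0.969858376s
```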
Mar 20 21:37:30.457370 kubelet[2567]: I0320 21:37:30.457282 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l2f8\" (UniqueName: \"kubernetes.io/projected/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-kube-api-access-2l2f8\") pod \"calico-kube-controllers-9c4b576c4-ld24m\" (UID: \"48cfd0f2-c351-498a-b143-cbea7fdfcbf4\") " pod="calico-system/calico-kube-controllers-9c4b576c4-ld24m" Mar 20 21:37:30.458172 kubelet[2567]: I0320 21:37:30.457711 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnzt8\" (UniqueName: \"kubernetes.io/projected/0086dfc3-437b-4e05-be4a-aa74eab0bd69-kube-api-access-qnzt8\") pod \"calico-apiserver-7c9db95659-hdm9r\" (UID: \"0086dfc3-437b-4e05-be4a-aa74eab0bd69\") " pod="calico-apiserver/calico-apiserver-7c9db95659-hdm9r" Mar 20 21:37:30.458172 kubelet[2567]: I0320 21:37:30.457741 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66e145b2-4210-4e17-a900-099df1f2a945-config-volume\") pod \"coredns-6f6b679f8f-dtv2b\" (UID: \"66e145b2-4210-4e17-a900-099df1f2a945\") " pod="kube-system/coredns-6f6b679f8f-dtv2b" Mar 20 21:37:30.458172 kubelet[2567]: I0320 21:37:30.457761 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-tigera-ca-bundle\") pod \"calico-kube-controllers-9c4b576c4-ld24m\" (UID: \"48cfd0f2-c351-498a-b143-cbea7fdfcbf4\") " pod="calico-system/calico-kube-controllers-9c4b576c4-ld24m" Mar 20 21:37:30.458172 kubelet[2567]: I0320 21:37:30.457935 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19b6e272-78dd-4728-a0b7-c69cdcf39993-calico-apiserver-certs\") pod \"calico-apiserver-7c9db95659-ffpcb\" (UID: \"19b6e272-78dd-4728-a0b7-c69cdcf39993\") " pod="calico-apiserver/calico-apiserver-7c9db95659-ffpcb" Mar 20 21:37:30.458172 kubelet[2567]: I0320 21:37:30.458015 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3d34a8-f5e5-459f-b088-f0d540cbdefd-config-volume\") pod \"coredns-6f6b679f8f-pfl2r\" (UID: \"7d3d34a8-f5e5-459f-b088-f0d540cbdefd\") " pod="kube-system/coredns-6f6b679f8f-pfl2r" Mar 20 21:37:30.458329 kubelet[2567]: I0320 21:37:30.458041 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vtcv\" (UniqueName: \"kubernetes.io/projected/7d3d34a8-f5e5-459f-b088-f0d540cbdefd-kube-api-access-8vtcv\") pod \"coredns-6f6b679f8f-pfl2r\" (UID: \"7d3d34a8-f5e5-459f-b088-f0d540cbdefd\") " pod="kube-system/coredns-6f6b679f8f-pfl2r" Mar 20 21:37:30.458329 kubelet[2567]: I0320 21:37:30.458059 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0086dfc3-437b-4e05-be4a-aa74eab0bd69-calico-apiserver-certs\") pod \"calico-apiserver-7c9db95659-hdm9r\" (UID: \"0086dfc3-437b-4e05-be4a-aa74eab0bd69\") " pod="calico-apiserver/calico-apiserver-7c9db95659-hdm9r" Mar 20 21:37:30.458329 kubelet[2567]: I0320 21:37:30.458077 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gd2sh\" (UniqueName: \"kubernetes.io/projected/19b6e272-78dd-4728-a0b7-c69cdcf39993-kube-api-access-gd2sh\") pod \"calico-apiserver-7c9db95659-ffpcb\" (UID: \"19b6e272-78dd-4728-a0b7-c69cdcf39993\") " pod="calico-apiserver/calico-apiserver-7c9db95659-ffpcb" Mar 20 21:37:30.458329 kubelet[2567]: I0320 21:37:30.458094 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5kq\" (UniqueName: \"kubernetes.io/projected/66e145b2-4210-4e17-a900-099df1f2a945-kube-api-access-8k5kq\") pod \"coredns-6f6b679f8f-dtv2b\" (UID: \"66e145b2-4210-4e17-a900-099df1f2a945\") " pod="kube-system/coredns-6f6b679f8f-dtv2b" Mar 20 21:37:30.479145 kubelet[2567]: E0320 21:37:30.479121 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:30.479816 containerd[1452]: time="2025-03-20T21:37:30.479775093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 20 21:37:30.616756 containerd[1452]: time="2025-03-20T21:37:30.616718436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c4b576c4-ld24m,Uid:48cfd0f2-c351-498a-b143-cbea7fdfcbf4,Namespace:calico-system,Attempt:0,}" Mar 20 21:37:30.623001 kubelet[2567]: E0320 21:37:30.622967 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:30.623402 containerd[1452]: time="2025-03-20T21:37:30.623340540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dtv2b,Uid:66e145b2-4210-4e17-a900-099df1f2a945,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:30.627615 kubelet[2567]: E0320 21:37:30.627575 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:30.627952 containerd[1452]: time="2025-03-20T21:37:30.627918428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pfl2r,Uid:7d3d34a8-f5e5-459f-b088-f0d540cbdefd,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:30.649784 containerd[1452]: time="2025-03-20T21:37:30.649756295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-ffpcb,Uid:19b6e272-78dd-4728-a0b7-c69cdcf39993,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:37:30.665396 containerd[1452]: time="2025-03-20T21:37:30.665182956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-hdm9r,Uid:0086dfc3-437b-4e05-be4a-aa74eab0bd69,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:37:31.040434 containerd[1452]: time="2025-03-20T21:37:31.040322984Z" level=error msg="Failed to destroy network for sandbox \"695d9b02cfe1422c64e7be524e892acac11c5507b1dcb688be19f05271b7cc17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.041540 containerd[1452]: time="2025-03-20T21:37:31.040415649Z" level=error msg="Failed to destroy network for sandbox \"b76aaabee86930db0956a5e0cd60d39552c4ee511a8b64f7bae750514fe3c7c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 20 21:37:31.041658 containerd[1452]: time="2025-03-20T21:37:31.041409358Z" level=error msg="Failed to destroy network for sandbox \"fec36e7ae872b44727360d77047c4840e99bdfa1e7feb85488c320bce61362aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.045541 containerd[1452]: time="2025-03-20T21:37:31.045506064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pfl2r,Uid:7d3d34a8-f5e5-459f-b088-f0d540cbdefd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"695d9b02cfe1422c64e7be524e892acac11c5507b1dcb688be19f05271b7cc17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.046682 containerd[1452]: time="2025-03-20T21:37:31.046343130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-hdm9r,Uid:0086dfc3-437b-4e05-be4a-aa74eab0bd69,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec36e7ae872b44727360d77047c4840e99bdfa1e7feb85488c320bce61362aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.047223 containerd[1452]: time="2025-03-20T21:37:31.047191359Z" level=error msg="Failed to destroy network for sandbox \"8b66deda86c39cfe027321de294fed0a61cc4b2fdf73a410cb990fac7bbc7934\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.048704 containerd[1452]: time="2025-03-20T21:37:31.048647393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c4b576c4-ld24m,Uid:48cfd0f2-c351-498a-b143-cbea7fdfcbf4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b66deda86c39cfe027321de294fed0a61cc4b2fdf73a410cb990fac7bbc7934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.049671 containerd[1452]: time="2025-03-20T21:37:31.049545675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dtv2b,Uid:66e145b2-4210-4e17-a900-099df1f2a945,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76aaabee86930db0956a5e0cd60d39552c4ee511a8b64f7bae750514fe3c7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.051099 kubelet[2567]: E0320 21:37:31.051049 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76aaabee86930db0956a5e0cd60d39552c4ee511a8b64f7bae750514fe3c7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 20 21:37:31.051193 kubelet[2567]: E0320 21:37:31.051128 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76aaabee86930db0956a5e0cd60d39552c4ee511a8b64f7bae750514fe3c7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dtv2b" Mar 20 21:37:31.051225 kubelet[2567]: E0320 21:37:31.051197 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76aaabee86930db0956a5e0cd60d39552c4ee511a8b64f7bae750514fe3c7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dtv2b" Mar 20 21:37:31.051409 kubelet[2567]: E0320 21:37:31.051237 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-dtv2b_kube-system(66e145b2-4210-4e17-a900-099df1f2a945)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-dtv2b_kube-system(66e145b2-4210-4e17-a900-099df1f2a945)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b76aaabee86930db0956a5e0cd60d39552c4ee511a8b64f7bae750514fe3c7c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dtv2b" podUID="66e145b2-4210-4e17-a900-099df1f2a945" Mar 20 21:37:31.051409 kubelet[2567]: E0320 21:37:31.051258 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"695d9b02cfe1422c64e7be524e892acac11c5507b1dcb688be19f05271b7cc17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.051409 kubelet[2567]: E0320 21:37:31.051316 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"695d9b02cfe1422c64e7be524e892acac11c5507b1dcb688be19f05271b7cc17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pfl2r" Mar 20 21:37:31.051532 kubelet[2567]: E0320 21:37:31.051333 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"695d9b02cfe1422c64e7be524e892acac11c5507b1dcb688be19f05271b7cc17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pfl2r" Mar 20 21:37:31.051532 kubelet[2567]: E0320 21:37:31.051365 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pfl2r_kube-system(7d3d34a8-f5e5-459f-b088-f0d540cbdefd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-pfl2r_kube-system(7d3d34a8-f5e5-459f-b088-f0d540cbdefd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"695d9b02cfe1422c64e7be524e892acac11c5507b1dcb688be19f05271b7cc17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pfl2r" podUID="7d3d34a8-f5e5-459f-b088-f0d540cbdefd" Mar 20 21:37:31.051532 kubelet[2567]: E0320 21:37:31.051403 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b66deda86c39cfe027321de294fed0a61cc4b2fdf73a410cb990fac7bbc7934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.051731 kubelet[2567]: E0320 21:37:31.051455 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b66deda86c39cfe027321de294fed0a61cc4b2fdf73a410cb990fac7bbc7934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9c4b576c4-ld24m" Mar 20 21:37:31.051731 kubelet[2567]: E0320 21:37:31.051472 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b66deda86c39cfe027321de294fed0a61cc4b2fdf73a410cb990fac7bbc7934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9c4b576c4-ld24m" Mar 20 21:37:31.051731 kubelet[2567]: E0320 21:37:31.051516 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9c4b576c4-ld24m_calico-system(48cfd0f2-c351-498a-b143-cbea7fdfcbf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9c4b576c4-ld24m_calico-system(48cfd0f2-c351-498a-b143-cbea7fdfcbf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b66deda86c39cfe027321de294fed0a61cc4b2fdf73a410cb990fac7bbc7934\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9c4b576c4-ld24m" podUID="48cfd0f2-c351-498a-b143-cbea7fdfcbf4" Mar 20 21:37:31.051922 kubelet[2567]: E0320 21:37:31.051890 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec36e7ae872b44727360d77047c4840e99bdfa1e7feb85488c320bce61362aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.051971 kubelet[2567]: E0320 21:37:31.051940 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec36e7ae872b44727360d77047c4840e99bdfa1e7feb85488c320bce61362aa\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9db95659-hdm9r" Mar 20 21:37:31.051971 kubelet[2567]: E0320 21:37:31.051966 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec36e7ae872b44727360d77047c4840e99bdfa1e7feb85488c320bce61362aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9db95659-hdm9r" Mar 20 21:37:31.052022 kubelet[2567]: E0320 21:37:31.052001 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9db95659-hdm9r_calico-apiserver(0086dfc3-437b-4e05-be4a-aa74eab0bd69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9db95659-hdm9r_calico-apiserver(0086dfc3-437b-4e05-be4a-aa74eab0bd69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fec36e7ae872b44727360d77047c4840e99bdfa1e7feb85488c320bce61362aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9db95659-hdm9r" podUID="0086dfc3-437b-4e05-be4a-aa74eab0bd69" Mar 20 21:37:31.056962 containerd[1452]: time="2025-03-20T21:37:31.056905783Z" level=error msg="Failed to destroy network for sandbox \"2feb8206ce66364502c37bdd6a36d833c3ae56506a547f986b2a167412caa4ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.058453 containerd[1452]: time="2025-03-20T21:37:31.058395146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-ffpcb,Uid:19b6e272-78dd-4728-a0b7-c69cdcf39993,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2feb8206ce66364502c37bdd6a36d833c3ae56506a547f986b2a167412caa4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.058743 kubelet[2567]: E0320 21:37:31.058718 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2feb8206ce66364502c37bdd6a36d833c3ae56506a547f986b2a167412caa4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.058802 kubelet[2567]: E0320 21:37:31.058759 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2feb8206ce66364502c37bdd6a36d833c3ae56506a547f986b2a167412caa4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9db95659-ffpcb" Mar 20 21:37:31.058802 kubelet[2567]: E0320 21:37:31.058776 2567 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2feb8206ce66364502c37bdd6a36d833c3ae56506a547f986b2a167412caa4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9db95659-ffpcb" Mar 20 21:37:31.058856 kubelet[2567]: E0320 21:37:31.058817 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9db95659-ffpcb_calico-apiserver(19b6e272-78dd-4728-a0b7-c69cdcf39993)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9db95659-ffpcb_calico-apiserver(19b6e272-78dd-4728-a0b7-c69cdcf39993)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2feb8206ce66364502c37bdd6a36d833c3ae56506a547f986b2a167412caa4ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9db95659-ffpcb" podUID="19b6e272-78dd-4728-a0b7-c69cdcf39993" Mar 20 21:37:31.408945 systemd[1]: Created slice kubepods-besteffort-pode8083da4_6460_47ba_b48a_9b8f613b80aa.slice - libcontainer container kubepods-besteffort-pode8083da4_6460_47ba_b48a_9b8f613b80aa.slice. Mar 20 21:37:31.411036 containerd[1452]: time="2025-03-20T21:37:31.411002352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lkcl,Uid:e8083da4-6460-47ba-b48a-9b8f613b80aa,Namespace:calico-system,Attempt:0,}" Mar 20 21:37:31.452270 containerd[1452]: time="2025-03-20T21:37:31.452223486Z" level=error msg="Failed to destroy network for sandbox \"4086e31c510f6a5b40e0223e30646feecd06b273a643eb884efa0d5c8d20d382\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.453227 containerd[1452]: time="2025-03-20T21:37:31.453172022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lkcl,Uid:e8083da4-6460-47ba-b48a-9b8f613b80aa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4086e31c510f6a5b40e0223e30646feecd06b273a643eb884efa0d5c8d20d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.453492 kubelet[2567]: E0320 21:37:31.453445 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4086e31c510f6a5b40e0223e30646feecd06b273a643eb884efa0d5c8d20d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 20 21:37:31.453592 kubelet[2567]: E0320 21:37:31.453506 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4086e31c510f6a5b40e0223e30646feecd06b273a643eb884efa0d5c8d20d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:31.453592 kubelet[2567]: E0320 21:37:31.453525 2567 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4086e31c510f6a5b40e0223e30646feecd06b273a643eb884efa0d5c8d20d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7lkcl" Mar 20 21:37:31.453592 kubelet[2567]: E0320 21:37:31.453565 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7lkcl_calico-system(e8083da4-6460-47ba-b48a-9b8f613b80aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7lkcl_calico-system(e8083da4-6460-47ba-b48a-9b8f613b80aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4086e31c510f6a5b40e0223e30646feecd06b273a643eb884efa0d5c8d20d382\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7lkcl" podUID="e8083da4-6460-47ba-b48a-9b8f613b80aa" Mar 20 21:37:31.577249 systemd[1]: run-netns-cni\x2d8b0e8803\x2dc65c\x2dbe57\x2d8185\x2d162208646f9b.mount: Deactivated successfully. Mar 20 21:37:31.577348 systemd[1]: run-netns-cni\x2d7182444e\x2da5de\x2d549e\x2db483\x2d97f53bc4b499.mount: Deactivated successfully. Mar 20 21:37:34.253318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987523406.mount: Deactivated successfully. Mar 20 21:37:34.407540 containerd[1452]: time="2025-03-20T21:37:34.407496528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:34.408416 containerd[1452]: time="2025-03-20T21:37:34.408211260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 20 21:37:34.409326 containerd[1452]: time="2025-03-20T21:37:34.409038699Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:34.411817 containerd[1452]: time="2025-03-20T21:37:34.410873419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:34.411817 containerd[1452]: time="2025-03-20T21:37:34.411523735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.931701309s" Mar 20 21:37:34.411817 containerd[1452]: time="2025-03-20T21:37:34.411551782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 20 21:37:34.423288 containerd[1452]: time="2025-03-20T21:37:34.422893426Z" level=info msg="CreateContainer within sandbox 
\"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 20 21:37:34.432223 containerd[1452]: time="2025-03-20T21:37:34.430710224Z" level=info msg="Container c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:34.440063 containerd[1452]: time="2025-03-20T21:37:34.440015219Z" level=info msg="CreateContainer within sandbox \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\"" Mar 20 21:37:34.441884 containerd[1452]: time="2025-03-20T21:37:34.440472689Z" level=info msg="StartContainer for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\"" Mar 20 21:37:34.448335 containerd[1452]: time="2025-03-20T21:37:34.448298368Z" level=info msg="connecting to shim c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158" address="unix:///run/containerd/s/e3b924c76a37434b949a37c6b37835d077f3a9bbfe4cc6c32b802a224da0ccac" protocol=ttrpc version=3 Mar 20 21:37:34.467807 systemd[1]: Started cri-containerd-c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158.scope - libcontainer container c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158. Mar 20 21:37:34.514390 containerd[1452]: time="2025-03-20T21:37:34.514282937Z" level=info msg="StartContainer for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" returns successfully" Mar 20 21:37:34.669381 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 20 21:37:34.669496 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 20 21:37:35.496893 kubelet[2567]: E0320 21:37:35.496564 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:35.599819 containerd[1452]: time="2025-03-20T21:37:35.599770422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" id:\"ee08662d3415607fc3aafdacf0b58ddda5eea9b9325b72256c6db7730fd70100\" pid:3595 exit_status:1 exited_at:{seconds:1742506655 nanos:599467952}" Mar 20 21:37:36.497791 kubelet[2567]: E0320 21:37:36.497750 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:36.546830 containerd[1452]: time="2025-03-20T21:37:36.546778788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" id:\"adb0fc5de4b1592fbc528d0ebf2eb072e27e83b5f4a202fc8e2013ef8f1ab670\" pid:3717 exit_status:1 exited_at:{seconds:1742506656 nanos:546487843}" Mar 20 21:37:37.540057 systemd[1]: Started sshd@7-10.0.0.3:22-10.0.0.1:39946.service - OpenSSH per-connection server daemon (10.0.0.1:39946). Mar 20 21:37:37.597799 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 39946 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:37.599039 sshd-session[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:37.602666 systemd-logind[1436]: New session 8 of user core. 
Mar 20 21:37:37.612748 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 21:37:37.752740 sshd[3759]: Connection closed by 10.0.0.1 port 39946 Mar 20 21:37:37.753803 sshd-session[3757]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:37.756949 systemd[1]: sshd@7-10.0.0.3:22-10.0.0.1:39946.service: Deactivated successfully. Mar 20 21:37:37.759657 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 21:37:37.761208 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Mar 20 21:37:37.762027 systemd-logind[1436]: Removed session 8. Mar 20 21:37:38.698290 kubelet[2567]: I0320 21:37:38.698251 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:37:38.700506 kubelet[2567]: E0320 21:37:38.700456 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:38.716517 kubelet[2567]: I0320 21:37:38.716460 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vvxj8" podStartSLOduration=5.027701485 podStartE2EDuration="15.716446958s" podCreationTimestamp="2025-03-20 21:37:23 +0000 UTC" firstStartedPulling="2025-03-20 21:37:23.727254218 +0000 UTC m=+15.405546739" lastFinishedPulling="2025-03-20 21:37:34.415999691 +0000 UTC m=+26.094292212" observedRunningTime="2025-03-20 21:37:35.512369995 +0000 UTC m=+27.190662516" watchObservedRunningTime="2025-03-20 21:37:38.716446958 +0000 UTC m=+30.394739479" Mar 20 21:37:39.227640 kernel: bpftool[3861]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 20 21:37:39.378974 systemd-networkd[1398]: vxlan.calico: Link UP Mar 20 21:37:39.378980 systemd-networkd[1398]: vxlan.calico: Gained carrier Mar 20 21:37:39.501672 kubelet[2567]: E0320 21:37:39.501570 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:40.687179 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Mar 20 21:37:41.404265 containerd[1452]: time="2025-03-20T21:37:41.404225795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-ffpcb,Uid:19b6e272-78dd-4728-a0b7-c69cdcf39993,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:37:41.404678 containerd[1452]: time="2025-03-20T21:37:41.404230996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-hdm9r,Uid:0086dfc3-437b-4e05-be4a-aa74eab0bd69,Namespace:calico-apiserver,Attempt:0,}" Mar 20 21:37:41.614838 systemd-networkd[1398]: cali5a551e3ad44: Link UP Mar 20 21:37:41.615433 systemd-networkd[1398]: cali5a551e3ad44: Gained carrier Mar 20 21:37:41.630228 containerd[1452]: 2025-03-20 21:37:41.473 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0 calico-apiserver-7c9db95659- calico-apiserver 19b6e272-78dd-4728-a0b7-c69cdcf39993 759 0 2025-03-20 21:37:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c9db95659 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c9db95659-ffpcb eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali5a551e3ad44 [] []}} ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-" Mar 20 21:37:41.630228 containerd[1452]: 2025-03-20 21:37:41.473 [INFO][3943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.630228 containerd[1452]: 2025-03-20 21:37:41.567 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" HandleID="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Workload="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.585 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" HandleID="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Workload="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034daf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c9db95659-ffpcb", "timestamp":"2025-03-20 21:37:41.567844281 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.585 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.585 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.585 [INFO][3966] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.587 [INFO][3966] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" host="localhost" Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.591 [INFO][3966] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.595 [INFO][3966] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.597 [INFO][3966] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.598 [INFO][3966] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:41.630419 containerd[1452]: 2025-03-20 21:37:41.599 [INFO][3966] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" host="localhost" Mar 20 21:37:41.630670 containerd[1452]: 2025-03-20 21:37:41.600 [INFO][3966] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd Mar 20 21:37:41.630670 containerd[1452]: 2025-03-20 21:37:41.603 [INFO][3966] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" host="localhost" Mar 20 21:37:41.630670 containerd[1452]: 2025-03-20 21:37:41.608 [INFO][3966] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" host="localhost" Mar 20 21:37:41.630670 containerd[1452]: 2025-03-20 21:37:41.608 [INFO][3966] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" host="localhost" Mar 20 21:37:41.630670 containerd[1452]: 2025-03-20 21:37:41.608 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
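The IPAM walk above confirms this host's affinity for the block 192.168.88.128/26 and claims 192.168.88.129 for the pod (the next pod below receives 192.168.88.130). The block arithmetic, sketched with Python's ipaddress module; sequential hand-out is what this particular log shows, not a Calico guarantee:

```python
# Block arithmetic for the IPAM decision above: a /26 affinity block holds
# 64 addresses, and the first usable hosts are .129 and .130 -- matching
# the two addresses claimed for the calico-apiserver pods in this log.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
print(block.num_addresses)        # 64
print(block.network_address)      # 192.168.88.128
hosts = list(block.hosts())
print(hosts[0], hosts[1])         # 192.168.88.129 192.168.88.130
```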
Mar 20 21:37:41.630670 containerd[1452]: 2025-03-20 21:37:41.608 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" HandleID="k8s-pod-network.310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Workload="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.630784 containerd[1452]: 2025-03-20 21:37:41.611 [INFO][3943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0", GenerateName:"calico-apiserver-7c9db95659-", Namespace:"calico-apiserver", SelfLink:"", UID:"19b6e272-78dd-4728-a0b7-c69cdcf39993", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9db95659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c9db95659-ffpcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a551e3ad44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:41.630850 containerd[1452]: 2025-03-20 21:37:41.611 [INFO][3943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.630850 containerd[1452]: 2025-03-20 21:37:41.611 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a551e3ad44 ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.630850 containerd[1452]: 2025-03-20 21:37:41.615 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.630908 containerd[1452]: 2025-03-20 21:37:41.616 [INFO][3943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0", GenerateName:"calico-apiserver-7c9db95659-", Namespace:"calico-apiserver", SelfLink:"", UID:"19b6e272-78dd-4728-a0b7-c69cdcf39993", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9db95659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd", Pod:"calico-apiserver-7c9db95659-ffpcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a551e3ad44", MAC:"ae:51:10:3a:4f:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:41.630954 containerd[1452]: 2025-03-20 21:37:41.627 [INFO][3943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-ffpcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--ffpcb-eth0" Mar 20 21:37:41.709902 containerd[1452]: time="2025-03-20T21:37:41.709774583Z" level=info msg="connecting to shim 310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd" address="unix:///run/containerd/s/4508dad61c477153cba2582d76f461bc33037cf01d944e1d55acea64b68d35dd" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:41.718630 systemd-networkd[1398]: cali23c1b45e2d5: Link UP Mar 20 21:37:41.719552 systemd-networkd[1398]: cali23c1b45e2d5: Gained carrier Mar 20 21:37:41.734119 containerd[1452]: 2025-03-20 21:37:41.473 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0 calico-apiserver-7c9db95659- calico-apiserver 0086dfc3-437b-4e05-be4a-aa74eab0bd69 758 0 2025-03-20 21:37:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c9db95659 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c9db95659-hdm9r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23c1b45e2d5 [] []}} ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-" Mar 20 21:37:41.734119 containerd[1452]: 2025-03-20 21:37:41.473 [INFO][3941] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.734119 containerd[1452]: 2025-03-20 21:37:41.567 [INFO][3964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" HandleID="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Workload="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.585 [INFO][3964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" HandleID="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Workload="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038b9f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c9db95659-hdm9r", "timestamp":"2025-03-20 21:37:41.56783912 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.585 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.608 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.608 [INFO][3964] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.689 [INFO][3964] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" host="localhost" Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.694 [INFO][3964] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.698 [INFO][3964] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.700 [INFO][3964] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.702 [INFO][3964] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:41.734313 containerd[1452]: 2025-03-20 21:37:41.702 [INFO][3964] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" host="localhost" Mar 20 21:37:41.734507 containerd[1452]: 2025-03-20 21:37:41.703 [INFO][3964] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c Mar 20 21:37:41.734507 containerd[1452]: 2025-03-20 21:37:41.707 [INFO][3964] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" host="localhost" Mar 20 21:37:41.734507 containerd[1452]: 2025-03-20 21:37:41.711 [INFO][3964] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" host="localhost" Mar 20 21:37:41.734507 containerd[1452]: 2025-03-20 21:37:41.711 [INFO][3964] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" host="localhost" Mar 20 21:37:41.734507 containerd[1452]: 2025-03-20 21:37:41.711 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:37:41.734507 containerd[1452]: 2025-03-20 21:37:41.711 [INFO][3964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" HandleID="k8s-pod-network.607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Workload="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.735189 containerd[1452]: 2025-03-20 21:37:41.715 [INFO][3941] cni-plugin/k8s.go 386: Populated endpoint ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0", GenerateName:"calico-apiserver-7c9db95659-", Namespace:"calico-apiserver", SelfLink:"", UID:"0086dfc3-437b-4e05-be4a-aa74eab0bd69", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9db95659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c9db95659-hdm9r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23c1b45e2d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:41.734971 systemd[1]: Started cri-containerd-310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd.scope - libcontainer container 310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd. 
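The dns.go:153 "Nameserver limits exceeded" errors that recur throughout this log reflect the glibc resolver's three-entry nameserver cap: kubelet applies only the first three servers from the node's resolv.conf (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and warns that the rest were omitted. A minimal sketch of that truncation — the fourth server in the example is hypothetical, since the log does not show which entries were dropped:

```python
# Sketch of kubelet's nameserver truncation (not its actual code): the
# glibc resolver honors at most three nameserver entries, so anything
# beyond the first three in resolv.conf is dropped with a warning.
MAX_NAMESERVERS = 3  # glibc MAXNS

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is:",
              " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]

# Four upstreams configured (the fourth is hypothetical); three applied:
conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```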
Mar 20 21:37:41.735526 containerd[1452]: 2025-03-20 21:37:41.715 [INFO][3941] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.735526 containerd[1452]: 2025-03-20 21:37:41.715 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23c1b45e2d5 ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.735526 containerd[1452]: 2025-03-20 21:37:41.720 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.735598 containerd[1452]: 2025-03-20 21:37:41.720 [INFO][3941] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0", GenerateName:"calico-apiserver-7c9db95659-", Namespace:"calico-apiserver", SelfLink:"", UID:"0086dfc3-437b-4e05-be4a-aa74eab0bd69", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9db95659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c", Pod:"calico-apiserver-7c9db95659-hdm9r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23c1b45e2d5", MAC:"ca:63:2c:47:d9:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:41.735672 containerd[1452]: 2025-03-20 21:37:41.730 [INFO][3941] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9db95659-hdm9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c9db95659--hdm9r-eth0" Mar 20 21:37:41.760137 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 
21:37:41.778389 containerd[1452]: time="2025-03-20T21:37:41.778303193Z" level=info msg="connecting to shim 607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c" address="unix:///run/containerd/s/98d222aad614010356370e146031eb8da51088448da67ccf3221b873677d9e04" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:41.782089 containerd[1452]: time="2025-03-20T21:37:41.782060304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-ffpcb,Uid:19b6e272-78dd-4728-a0b7-c69cdcf39993,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd\"" Mar 20 21:37:41.784760 containerd[1452]: time="2025-03-20T21:37:41.784365500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 20 21:37:41.803766 systemd[1]: Started cri-containerd-607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c.scope - libcontainer container 607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c. Mar 20 21:37:41.814653 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:37:41.834182 containerd[1452]: time="2025-03-20T21:37:41.834133599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9db95659-hdm9r,Uid:0086dfc3-437b-4e05-be4a-aa74eab0bd69,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c\"" Mar 20 21:37:42.770005 systemd[1]: Started sshd@8-10.0.0.3:22-10.0.0.1:48882.service - OpenSSH per-connection server daemon (10.0.0.1:48882). Mar 20 21:37:42.828375 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 48882 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:42.829678 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:42.834328 systemd-logind[1436]: New session 9 of user core. Mar 20 21:37:42.843743 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 20 21:37:43.072282 sshd[4102]: Connection closed by 10.0.0.1 port 48882 Mar 20 21:37:43.072715 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:43.076480 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Mar 20 21:37:43.076794 systemd[1]: sshd@8-10.0.0.3:22-10.0.0.1:48882.service: Deactivated successfully. Mar 20 21:37:43.079366 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 21:37:43.081176 systemd-logind[1436]: Removed session 9. 
Mar 20 21:37:43.357031 containerd[1452]: time="2025-03-20T21:37:43.356920670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:43.357523 containerd[1452]: time="2025-03-20T21:37:43.357460526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 20 21:37:43.358056 containerd[1452]: time="2025-03-20T21:37:43.358024027Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:43.360526 containerd[1452]: time="2025-03-20T21:37:43.360491907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:43.361340 containerd[1452]: time="2025-03-20T21:37:43.361299691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.576901665s" Mar 20 21:37:43.361340 containerd[1452]: time="2025-03-20T21:37:43.361337218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 20 21:37:43.362446 containerd[1452]: time="2025-03-20T21:37:43.362363881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 20 21:37:43.363530 containerd[1452]: time="2025-03-20T21:37:43.363482921Z" level=info msg="CreateContainer within sandbox \"310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 20 21:37:43.370222 containerd[1452]: time="2025-03-20T21:37:43.369712433Z" level=info msg="Container bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:43.387877 containerd[1452]: time="2025-03-20T21:37:43.387718286Z" level=info msg="CreateContainer within sandbox \"310ba2c80c048da96ae328966d9d0b3cba5b9d2380e11e7ae23a384b53d482dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3\"" Mar 20 21:37:43.388246 containerd[1452]: time="2025-03-20T21:37:43.388212814Z" level=info msg="StartContainer for \"bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3\"" Mar 20 21:37:43.389344 containerd[1452]: time="2025-03-20T21:37:43.389305769Z" level=info msg="connecting to shim bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3" address="unix:///run/containerd/s/4508dad61c477153cba2582d76f461bc33037cf01d944e1d55acea64b68d35dd" protocol=ttrpc version=3 Mar 20 21:37:43.404440 kubelet[2567]: E0320 21:37:43.404396 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:43.405136 kubelet[2567]: E0320 21:37:43.405101 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:43.405495 containerd[1452]: time="2025-03-20T21:37:43.405452371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pfl2r,Uid:7d3d34a8-f5e5-459f-b088-f0d540cbdefd,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:43.406121 containerd[1452]: time="2025-03-20T21:37:43.406093085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dtv2b,Uid:66e145b2-4210-4e17-a900-099df1f2a945,Namespace:kube-system,Attempt:0,}" Mar 20 21:37:43.406173 containerd[1452]: time="2025-03-20T21:37:43.406135413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lkcl,Uid:e8083da4-6460-47ba-b48a-9b8f613b80aa,Namespace:calico-system,Attempt:0,}" Mar 20 21:37:43.422923 systemd[1]: Started cri-containerd-bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3.scope - libcontainer container bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3. Mar 20 21:37:43.439920 systemd-networkd[1398]: cali5a551e3ad44: Gained IPv6LL Mar 20 21:37:43.475181 containerd[1452]: time="2025-03-20T21:37:43.474987020Z" level=info msg="StartContainer for \"bb7417cc6691762732b4321bb25d6107e2d8bb7a3a0485da58d6e1f2f15709a3\" returns successfully" Mar 20 21:37:43.531666 kubelet[2567]: I0320 21:37:43.531549 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c9db95659-ffpcb" podStartSLOduration=16.952603953 podStartE2EDuration="18.531532151s" podCreationTimestamp="2025-03-20 21:37:25 +0000 UTC" firstStartedPulling="2025-03-20 21:37:41.783332385 +0000 UTC m=+33.461624906" lastFinishedPulling="2025-03-20 21:37:43.362260583 +0000 UTC m=+35.040553104" observedRunningTime="2025-03-20 21:37:43.530579581 +0000 UTC m=+35.208872102" watchObservedRunningTime="2025-03-20 21:37:43.531532151 +0000 UTC m=+35.209824672" Mar 20 21:37:43.640154 containerd[1452]: time="2025-03-20T21:37:43.640052958Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:43.642109 containerd[1452]: time="2025-03-20T21:37:43.642061876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 20 21:37:43.645911 containerd[1452]: time="2025-03-20T21:37:43.645879838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 283.485391ms" Mar 20 21:37:43.645962 containerd[1452]: time="2025-03-20T21:37:43.645915324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 20 21:37:43.649752 containerd[1452]: time="2025-03-20T21:37:43.649717843Z" level=info msg="CreateContainer within sandbox \"607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 20 21:37:43.657914 containerd[1452]: time="2025-03-20T21:37:43.657884020Z" level=info msg="Container 504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:43.669675 containerd[1452]: 
time="2025-03-20T21:37:43.669639318Z" level=info msg="CreateContainer within sandbox \"607141a4fe9426534415fb8649f2cb5309b889c2d7285bc9d31a205c9473642c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d\"" Mar 20 21:37:43.670313 containerd[1452]: time="2025-03-20T21:37:43.670109882Z" level=info msg="StartContainer for \"504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d\"" Mar 20 21:37:43.672861 containerd[1452]: time="2025-03-20T21:37:43.672214457Z" level=info msg="connecting to shim 504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d" address="unix:///run/containerd/s/98d222aad614010356370e146031eb8da51088448da67ccf3221b873677d9e04" protocol=ttrpc version=3 Mar 20 21:37:43.673319 systemd-networkd[1398]: cali0b02f33f09f: Link UP Mar 20 21:37:43.675292 systemd-networkd[1398]: cali0b02f33f09f: Gained carrier Mar 20 21:37:43.696487 containerd[1452]: 2025-03-20 21:37:43.472 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7lkcl-eth0 csi-node-driver- calico-system e8083da4-6460-47ba-b48a-9b8f613b80aa 598 0 2025-03-20 21:37:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7lkcl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b02f33f09f [] []}} ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-" Mar 20 21:37:43.696487 containerd[1452]: 2025-03-20 21:37:43.472 [INFO][4136] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.696487 containerd[1452]: 2025-03-20 21:37:43.519 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" HandleID="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Workload="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.631 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" HandleID="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Workload="localhost-k8s-csi--node--driver--7lkcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001330b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7lkcl", "timestamp":"2025-03-20 21:37:43.519348137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.631 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.631 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.632 [INFO][4200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.635 [INFO][4200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" host="localhost" Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.640 [INFO][4200] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.649 [INFO][4200] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.651 [INFO][4200] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.653 [INFO][4200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:43.697380 containerd[1452]: 2025-03-20 21:37:43.653 [INFO][4200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" host="localhost" Mar 20 21:37:43.697595 containerd[1452]: 2025-03-20 21:37:43.654 [INFO][4200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd Mar 20 21:37:43.697595 containerd[1452]: 2025-03-20 21:37:43.661 [INFO][4200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" host="localhost" Mar 20 21:37:43.697595 containerd[1452]: 2025-03-20 21:37:43.666 [INFO][4200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" host="localhost" Mar 20 21:37:43.697595 containerd[1452]: 2025-03-20 21:37:43.666 [INFO][4200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" host="localhost" Mar 20 21:37:43.697595 containerd[1452]: 2025-03-20 21:37:43.666 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:37:43.697595 containerd[1452]: 2025-03-20 21:37:43.666 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" HandleID="k8s-pod-network.b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Workload="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.697794 containerd[1452]: 2025-03-20 21:37:43.669 [INFO][4136] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7lkcl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8083da4-6460-47ba-b48a-9b8f613b80aa", ResourceVersion:"598", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7lkcl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b02f33f09f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:43.697794 containerd[1452]: 2025-03-20 21:37:43.669 [INFO][4136] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.697867 containerd[1452]: 2025-03-20 21:37:43.669 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b02f33f09f ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.697867 containerd[1452]: 2025-03-20 21:37:43.675 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.697906 containerd[1452]: 2025-03-20 21:37:43.678 [INFO][4136] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7lkcl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8083da4-6460-47ba-b48a-9b8f613b80aa", ResourceVersion:"598", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd", Pod:"csi-node-driver-7lkcl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b02f33f09f", MAC:"2a:37:82:17:03:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:43.697952 containerd[1452]: 2025-03-20 21:37:43.690 [INFO][4136] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" Namespace="calico-system" Pod="csi-node-driver-7lkcl" WorkloadEndpoint="localhost-k8s-csi--node--driver--7lkcl-eth0" Mar 20 21:37:43.699791 systemd[1]: Started cri-containerd-504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d.scope - libcontainer container 504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d. Mar 20 21:37:43.717388 containerd[1452]: time="2025-03-20T21:37:43.717349232Z" level=info msg="connecting to shim b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd" address="unix:///run/containerd/s/5541a1fe40968bd0e4c616d84b64ad9da154bc83c359f206ec0d0ac94771bcd9" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:43.740930 systemd[1]: Started cri-containerd-b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd.scope - libcontainer container b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd. 
Mar 20 21:37:43.758783 systemd-networkd[1398]: cali23c1b45e2d5: Gained IPv6LL Mar 20 21:37:43.760930 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:37:43.767529 containerd[1452]: time="2025-03-20T21:37:43.767492861Z" level=info msg="StartContainer for \"504c7c4a0d48742374e49dfeb75ebe2c824122c51c0eeb814db811eb99b7642d\" returns successfully" Mar 20 21:37:43.780108 systemd-networkd[1398]: cali0e4d7528e17: Link UP Mar 20 21:37:43.780902 systemd-networkd[1398]: cali0e4d7528e17: Gained carrier Mar 20 21:37:43.784235 containerd[1452]: time="2025-03-20T21:37:43.784199002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7lkcl,Uid:e8083da4-6460-47ba-b48a-9b8f613b80aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd\"" Mar 20 21:37:43.786103 containerd[1452]: time="2025-03-20T21:37:43.786081698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 20 21:37:43.796445 containerd[1452]: 2025-03-20 21:37:43.471 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0 coredns-6f6b679f8f- kube-system 66e145b2-4210-4e17-a900-099df1f2a945 757 0 2025-03-20 21:37:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-dtv2b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e4d7528e17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-" Mar 20 21:37:43.796445 containerd[1452]: 2025-03-20 21:37:43.472 [INFO][4138] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.796445 containerd[1452]: 2025-03-20 21:37:43.515 [INFO][4213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" HandleID="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Workload="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.632 [INFO][4213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" HandleID="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Workload="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aaff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-dtv2b", "timestamp":"2025-03-20 21:37:43.515542018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.632 [INFO][4213] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.667 [INFO][4213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.667 [INFO][4213] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.735 [INFO][4213] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" host="localhost" Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.740 [INFO][4213] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.748 [INFO][4213] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.750 [INFO][4213] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.754 [INFO][4213] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:43.796631 containerd[1452]: 2025-03-20 21:37:43.754 [INFO][4213] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" host="localhost" Mar 20 21:37:43.796969 containerd[1452]: 2025-03-20 21:37:43.756 [INFO][4213] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812 Mar 20 21:37:43.796969 containerd[1452]: 2025-03-20 21:37:43.763 [INFO][4213] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" host="localhost" Mar 20 21:37:43.796969 containerd[1452]: 2025-03-20 21:37:43.772 [INFO][4213] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" host="localhost" Mar 20 21:37:43.796969 containerd[1452]: 2025-03-20 21:37:43.772 [INFO][4213] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" host="localhost" Mar 20 21:37:43.796969 containerd[1452]: 2025-03-20 21:37:43.772 [INFO][4213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:37:43.796969 containerd[1452]: 2025-03-20 21:37:43.772 [INFO][4213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" HandleID="k8s-pod-network.60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Workload="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.797081 containerd[1452]: 2025-03-20 21:37:43.775 [INFO][4138] cni-plugin/k8s.go 386: Populated endpoint ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"66e145b2-4210-4e17-a900-099df1f2a945", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-dtv2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e4d7528e17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:43.797137 containerd[1452]: 2025-03-20 21:37:43.776 [INFO][4138] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.797137 containerd[1452]: 2025-03-20 21:37:43.776 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e4d7528e17 ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.797137 containerd[1452]: 2025-03-20 21:37:43.781 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.797198 containerd[1452]: 2025-03-20 21:37:43.781 
[INFO][4138] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"66e145b2-4210-4e17-a900-099df1f2a945", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812", Pod:"coredns-6f6b679f8f-dtv2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e4d7528e17", MAC:"5a:81:2c:11:67:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:43.797198 containerd[1452]: 2025-03-20 21:37:43.792 [INFO][4138] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" Namespace="kube-system" Pod="coredns-6f6b679f8f-dtv2b" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dtv2b-eth0" Mar 20 21:37:43.816321 containerd[1452]: time="2025-03-20T21:37:43.816280688Z" level=info msg="connecting to shim 60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812" address="unix:///run/containerd/s/f2f8c1d7ce773718b83027423ae8be051781bd6697d9b584207e7c487f5e903d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:43.839784 systemd[1]: Started cri-containerd-60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812.scope - libcontainer container 60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812. 
Mar 20 21:37:43.853186 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:37:43.874893 systemd-networkd[1398]: cali3619dadbe36: Link UP Mar 20 21:37:43.875707 systemd-networkd[1398]: cali3619dadbe36: Gained carrier Mar 20 21:37:43.891444 containerd[1452]: time="2025-03-20T21:37:43.891300596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dtv2b,Uid:66e145b2-4210-4e17-a900-099df1f2a945,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812\"" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.475 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0 coredns-6f6b679f8f- kube-system 7d3d34a8-f5e5-459f-b088-f0d540cbdefd 760 0 2025-03-20 21:37:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-pfl2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3619dadbe36 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.476 [INFO][4159] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.528 [INFO][4202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" HandleID="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Workload="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.635 [INFO][4202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" HandleID="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Workload="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004e48d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-pfl2r", "timestamp":"2025-03-20 21:37:43.528573663 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.635 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.772 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.772 [INFO][4202] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.835 [INFO][4202] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.841 [INFO][4202] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.849 [INFO][4202] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.851 [INFO][4202] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.854 [INFO][4202] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.854 [INFO][4202] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.856 [INFO][4202] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7 Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.860 [INFO][4202] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.870 [INFO][4202] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.870 [INFO][4202] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" host="localhost" Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.870 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 20 21:37:43.892741 containerd[1452]: 2025-03-20 21:37:43.870 [INFO][4202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" HandleID="k8s-pod-network.f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Workload="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.893548 containerd[1452]: 2025-03-20 21:37:43.872 [INFO][4159] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7d3d34a8-f5e5-459f-b088-f0d540cbdefd", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-pfl2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3619dadbe36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:43.893548 containerd[1452]: 2025-03-20 21:37:43.872 [INFO][4159] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.893548 containerd[1452]: 2025-03-20 21:37:43.872 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3619dadbe36 ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.893548 containerd[1452]: 2025-03-20 21:37:43.876 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.893548 containerd[1452]: 2025-03-20 21:37:43.876 
[INFO][4159] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7d3d34a8-f5e5-459f-b088-f0d540cbdefd", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7", Pod:"coredns-6f6b679f8f-pfl2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3619dadbe36", MAC:"7a:53:f7:4b:c5:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:43.893548 containerd[1452]: 2025-03-20 21:37:43.889 [INFO][4159] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" Namespace="kube-system" Pod="coredns-6f6b679f8f-pfl2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--pfl2r-eth0" Mar 20 21:37:43.895064 kubelet[2567]: E0320 21:37:43.894675 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:43.898384 containerd[1452]: time="2025-03-20T21:37:43.898348533Z" level=info msg="CreateContainer within sandbox \"60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:37:43.913622 containerd[1452]: time="2025-03-20T21:37:43.912116710Z" level=info msg="Container ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:43.920060 containerd[1452]: time="2025-03-20T21:37:43.920014200Z" level=info msg="connecting to shim f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7" address="unix:///run/containerd/s/f3bc1a339c8a6b5e5abd4cae99e2dc10f003c067c2d5b76ff3a151d7db06d75d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:43.923386 containerd[1452]: time="2025-03-20T21:37:43.923351035Z" 
level=info msg="CreateContainer within sandbox \"60d8cc58f5d08d7ffa510a63f2658c9343da29ff6ed9f16d6d1d75f6ae880812\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287\"" Mar 20 21:37:43.924855 containerd[1452]: time="2025-03-20T21:37:43.924826619Z" level=info msg="StartContainer for \"ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287\"" Mar 20 21:37:43.925672 containerd[1452]: time="2025-03-20T21:37:43.925600357Z" level=info msg="connecting to shim ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287" address="unix:///run/containerd/s/f2f8c1d7ce773718b83027423ae8be051781bd6697d9b584207e7c487f5e903d" protocol=ttrpc version=3 Mar 20 21:37:43.950791 systemd[1]: Started cri-containerd-f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7.scope - libcontainer container f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7. Mar 20 21:37:43.962528 systemd[1]: Started cri-containerd-ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287.scope - libcontainer container ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287. Mar 20 21:37:43.967917 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:37:43.997646 containerd[1452]: time="2025-03-20T21:37:43.997085754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pfl2r,Uid:7d3d34a8-f5e5-459f-b088-f0d540cbdefd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7\"" Mar 20 21:37:43.997805 kubelet[2567]: E0320 21:37:43.997784 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:43.999095 containerd[1452]: time="2025-03-20T21:37:43.999021460Z" level=info msg="StartContainer for \"ca0387fe9eba6ad2c28c063b4e3cba222779bc132a7fe910e6a79cfe95c93287\" returns successfully" Mar 20 21:37:43.999372 containerd[1452]: time="2025-03-20T21:37:43.999345957Z" level=info msg="CreateContainer within sandbox \"f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:37:44.010588 containerd[1452]: time="2025-03-20T21:37:44.010479983Z" level=info msg="Container ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:44.046647 containerd[1452]: time="2025-03-20T21:37:44.046581329Z" level=info msg="CreateContainer within sandbox \"f00559f7085480139bbbe97d376ac9335c0e4252563c0c993164821e88b1f1e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af\"" Mar 20 21:37:44.047282 containerd[1452]: time="2025-03-20T21:37:44.047248525Z" level=info msg="StartContainer for \"ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af\"" Mar 20 21:37:44.049203 containerd[1452]: time="2025-03-20T21:37:44.048032101Z" level=info msg="connecting to shim ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af" address="unix:///run/containerd/s/f3bc1a339c8a6b5e5abd4cae99e2dc10f003c067c2d5b76ff3a151d7db06d75d" protocol=ttrpc version=3 Mar 20 21:37:44.069815 systemd[1]: Started cri-containerd-ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af.scope - libcontainer container 
ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af. Mar 20 21:37:44.141575 containerd[1452]: time="2025-03-20T21:37:44.141465357Z" level=info msg="StartContainer for \"ee2d483c12f27fe931f4ffe6ee05a64ab0fe6fd4e410f24136d06c28d09ba9af\" returns successfully" Mar 20 21:37:44.403985 containerd[1452]: time="2025-03-20T21:37:44.403952155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c4b576c4-ld24m,Uid:48cfd0f2-c351-498a-b143-cbea7fdfcbf4,Namespace:calico-system,Attempt:0,}" Mar 20 21:37:44.550873 kubelet[2567]: E0320 21:37:44.550549 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:44.552503 systemd-networkd[1398]: cali6b8bfb6d1ff: Link UP Mar 20 21:37:44.553119 systemd-networkd[1398]: cali6b8bfb6d1ff: Gained carrier Mar 20 21:37:44.563387 kubelet[2567]: I0320 21:37:44.563078 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c9db95659-hdm9r" podStartSLOduration=17.750518682 podStartE2EDuration="19.56306205s" podCreationTimestamp="2025-03-20 21:37:25 +0000 UTC" firstStartedPulling="2025-03-20 21:37:41.835343108 +0000 UTC m=+33.513635589" lastFinishedPulling="2025-03-20 21:37:43.647886436 +0000 UTC m=+35.326178957" observedRunningTime="2025-03-20 21:37:44.562959432 +0000 UTC m=+36.241251913" watchObservedRunningTime="2025-03-20 21:37:44.56306205 +0000 UTC m=+36.241354571" Mar 20 21:37:44.566442 kubelet[2567]: I0320 21:37:44.564880 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:37:44.566442 kubelet[2567]: E0320 21:37:44.565158 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.454 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0 calico-kube-controllers-9c4b576c4- calico-system 48cfd0f2-c351-498a-b143-cbea7fdfcbf4 753 0 2025-03-20 21:37:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9c4b576c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9c4b576c4-ld24m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6b8bfb6d1ff [] []}} ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.454 [INFO][4511] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.497 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" 
HandleID="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Workload="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.510 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" HandleID="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Workload="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9c4b576c4-ld24m", "timestamp":"2025-03-20 21:37:44.497923305 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.511 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.511 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.511 [INFO][4526] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.513 [INFO][4526] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.517 [INFO][4526] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.522 [INFO][4526] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.525 [INFO][4526] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.528 [INFO][4526] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.528 [INFO][4526] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.530 [INFO][4526] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3 Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.534 [INFO][4526] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.540 [INFO][4526] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.540 [INFO][4526] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" host="localhost" Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.540 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:37:44.575632 containerd[1452]: 2025-03-20 21:37:44.540 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" HandleID="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Workload="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.576238 containerd[1452]: 2025-03-20 21:37:44.543 [INFO][4511] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0", GenerateName:"calico-kube-controllers-9c4b576c4-", Namespace:"calico-system", SelfLink:"", UID:"48cfd0f2-c351-498a-b143-cbea7fdfcbf4", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9c4b576c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9c4b576c4-ld24m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b8bfb6d1ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:44.576238 containerd[1452]: 2025-03-20 21:37:44.543 [INFO][4511] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.576238 containerd[1452]: 2025-03-20 21:37:44.543 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b8bfb6d1ff ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.576238 containerd[1452]: 2025-03-20 21:37:44.552 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.576238 containerd[1452]: 2025-03-20 21:37:44.552 [INFO][4511] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0", GenerateName:"calico-kube-controllers-9c4b576c4-", Namespace:"calico-system", SelfLink:"", UID:"48cfd0f2-c351-498a-b143-cbea7fdfcbf4", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.March, 20, 21, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9c4b576c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3", Pod:"calico-kube-controllers-9c4b576c4-ld24m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b8bfb6d1ff", MAC:"9a:50:3b:2c:83:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 20 21:37:44.576238 containerd[1452]: 2025-03-20 21:37:44.567 [INFO][4511] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Namespace="calico-system" Pod="calico-kube-controllers-9c4b576c4-ld24m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:37:44.578093 kubelet[2567]: I0320 21:37:44.578025 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pfl2r" podStartSLOduration=29.578010125 podStartE2EDuration="29.578010125s" podCreationTimestamp="2025-03-20 21:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:37:44.577603254 +0000 UTC m=+36.255895775" watchObservedRunningTime="2025-03-20 21:37:44.578010125 +0000 UTC m=+36.256302646" Mar 20 21:37:44.604440 kubelet[2567]: I0320 21:37:44.604335 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dtv2b" podStartSLOduration=29.604265802 podStartE2EDuration="29.604265802s" podCreationTimestamp="2025-03-20 21:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:37:44.602484092 +0000 UTC m=+36.280776613" watchObservedRunningTime="2025-03-20 21:37:44.604265802 +0000 UTC m=+36.282558363" Mar 20 
21:37:44.616178 containerd[1452]: time="2025-03-20T21:37:44.616123940Z" level=info msg="connecting to shim c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" address="unix:///run/containerd/s/5e4d87577fcb5cd5f188503686aaf1225bec462c1b2843ba007f089538051499" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:37:44.646923 systemd[1]: Started cri-containerd-c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3.scope - libcontainer container c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3. Mar 20 21:37:44.684918 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:37:44.746418 containerd[1452]: time="2025-03-20T21:37:44.746362624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c4b576c4-ld24m,Uid:48cfd0f2-c351-498a-b143-cbea7fdfcbf4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\"" Mar 20 21:37:44.783823 systemd-networkd[1398]: cali0b02f33f09f: Gained IPv6LL Mar 20 21:37:44.826930 containerd[1452]: time="2025-03-20T21:37:44.826883919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:44.829071 containerd[1452]: time="2025-03-20T21:37:44.827300952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 20 21:37:44.829071 containerd[1452]: time="2025-03-20T21:37:44.828032239Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:44.830428 containerd[1452]: time="2025-03-20T21:37:44.830394329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:44.831201 containerd[1452]: time="2025-03-20T21:37:44.831166903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.045044477s" Mar 20 21:37:44.831319 containerd[1452]: time="2025-03-20T21:37:44.831302046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 20 21:37:44.832229 containerd[1452]: time="2025-03-20T21:37:44.832198122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 20 21:37:44.833686 containerd[1452]: time="2025-03-20T21:37:44.833657775Z" level=info msg="CreateContainer within sandbox \"b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 20 21:37:44.842987 containerd[1452]: time="2025-03-20T21:37:44.842950108Z" level=info msg="Container 96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:44.848107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151155378.mount: Deactivated successfully. 
Mar 20 21:37:44.859086 containerd[1452]: time="2025-03-20T21:37:44.859042101Z" level=info msg="CreateContainer within sandbox \"b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a\"" Mar 20 21:37:44.864638 containerd[1452]: time="2025-03-20T21:37:44.859865284Z" level=info msg="StartContainer for \"96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a\"" Mar 20 21:37:44.864638 containerd[1452]: time="2025-03-20T21:37:44.861341300Z" level=info msg="connecting to shim 96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a" address="unix:///run/containerd/s/5541a1fe40968bd0e4c616d84b64ad9da154bc83c359f206ec0d0ac94771bcd9" protocol=ttrpc version=3 Mar 20 21:37:44.889992 systemd[1]: Started cri-containerd-96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a.scope - libcontainer container 96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a. Mar 20 21:37:44.911789 systemd-networkd[1398]: cali0e4d7528e17: Gained IPv6LL Mar 20 21:37:44.949089 containerd[1452]: time="2025-03-20T21:37:44.947542221Z" level=info msg="StartContainer for \"96655c4efa7b95de5d24480d2477e776bd5703c387f4c86809abd0d776dd412a\" returns successfully" Mar 20 21:37:45.230851 systemd-networkd[1398]: cali3619dadbe36: Gained IPv6LL Mar 20 21:37:45.572663 kubelet[2567]: I0320 21:37:45.571301 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:37:45.572663 kubelet[2567]: E0320 21:37:45.571588 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:45.572663 kubelet[2567]: E0320 21:37:45.571680 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:46.257252 containerd[1452]: time="2025-03-20T21:37:46.256817997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:46.257651 containerd[1452]: time="2025-03-20T21:37:46.257379809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 20 21:37:46.258222 containerd[1452]: time="2025-03-20T21:37:46.258174940Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:46.259836 containerd[1452]: time="2025-03-20T21:37:46.259794487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:46.260412 containerd[1452]: time="2025-03-20T21:37:46.260302850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.428072163s" Mar 20 21:37:46.260412 containerd[1452]: 
time="2025-03-20T21:37:46.260330375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 20 21:37:46.261582 containerd[1452]: time="2025-03-20T21:37:46.261557017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 20 21:37:46.273853 containerd[1452]: time="2025-03-20T21:37:46.273814995Z" level=info msg="CreateContainer within sandbox \"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 20 21:37:46.280642 containerd[1452]: time="2025-03-20T21:37:46.279664759Z" level=info msg="Container e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:46.284539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889324791.mount: Deactivated successfully. Mar 20 21:37:46.287066 containerd[1452]: time="2025-03-20T21:37:46.287023010Z" level=info msg="CreateContainer within sandbox \"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\"" Mar 20 21:37:46.287412 containerd[1452]: time="2025-03-20T21:37:46.287386950Z" level=info msg="StartContainer for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\"" Mar 20 21:37:46.288420 containerd[1452]: time="2025-03-20T21:37:46.288389315Z" level=info msg="connecting to shim e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d" address="unix:///run/containerd/s/5e4d87577fcb5cd5f188503686aaf1225bec462c1b2843ba007f089538051499" protocol=ttrpc version=3 Mar 20 21:37:46.309743 systemd[1]: Started cri-containerd-e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d.scope - libcontainer container e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d. 
Mar 20 21:37:46.341915 containerd[1452]: time="2025-03-20T21:37:46.341884164Z" level=info msg="StartContainer for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" returns successfully" Mar 20 21:37:46.447740 systemd-networkd[1398]: cali6b8bfb6d1ff: Gained IPv6LL Mar 20 21:37:46.575211 kubelet[2567]: E0320 21:37:46.575053 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:46.575211 kubelet[2567]: E0320 21:37:46.575181 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:46.586179 kubelet[2567]: I0320 21:37:46.585544 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9c4b576c4-ld24m" podStartSLOduration=22.073102047 podStartE2EDuration="23.585531403s" podCreationTimestamp="2025-03-20 21:37:23 +0000 UTC" firstStartedPulling="2025-03-20 21:37:44.748788365 +0000 UTC m=+36.427080886" lastFinishedPulling="2025-03-20 21:37:46.261217761 +0000 UTC m=+37.939510242" observedRunningTime="2025-03-20 21:37:46.584804484 +0000 UTC m=+38.263097005" watchObservedRunningTime="2025-03-20 21:37:46.585531403 +0000 UTC m=+38.263823884" Mar 20 21:37:47.558016 containerd[1452]: time="2025-03-20T21:37:47.557458480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:47.558016 containerd[1452]: time="2025-03-20T21:37:47.557950239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 20 21:37:47.558927 containerd[1452]: time="2025-03-20T21:37:47.558896591Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:47.560756 containerd[1452]: time="2025-03-20T21:37:47.560725645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:37:47.561277 containerd[1452]: time="2025-03-20T21:37:47.561249409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.299649906s" Mar 20 21:37:47.561320 containerd[1452]: time="2025-03-20T21:37:47.561283855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 20 21:37:47.563512 containerd[1452]: time="2025-03-20T21:37:47.563475807Z" level=info msg="CreateContainer within sandbox \"b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 20 21:37:47.571499 containerd[1452]: time="2025-03-20T21:37:47.570445566Z" level=info msg="Container 
970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:37:47.576073 kubelet[2567]: I0320 21:37:47.576049 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:37:47.579591 containerd[1452]: time="2025-03-20T21:37:47.579553509Z" level=info msg="CreateContainer within sandbox \"b56f49d33b6b4de031752e1d1d49d55443258a72580af815a87650fc8cdde8dd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12\"" Mar 20 21:37:47.580053 containerd[1452]: time="2025-03-20T21:37:47.580023465Z" level=info msg="StartContainer for \"970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12\"" Mar 20 21:37:47.581337 containerd[1452]: time="2025-03-20T21:37:47.581292069Z" level=info msg="connecting to shim 970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12" address="unix:///run/containerd/s/5541a1fe40968bd0e4c616d84b64ad9da154bc83c359f206ec0d0ac94771bcd9" protocol=ttrpc version=3 Mar 20 21:37:47.601783 systemd[1]: Started cri-containerd-970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12.scope - libcontainer container 970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12. Mar 20 21:37:47.631592 containerd[1452]: time="2025-03-20T21:37:47.631497053Z" level=info msg="StartContainer for \"970de98a9b814745234f01facd9f4301fc5d361207eceff6d33f41a5820c3a12\" returns successfully" Mar 20 21:37:48.087861 systemd[1]: Started sshd@9-10.0.0.3:22-10.0.0.1:48884.service - OpenSSH per-connection server daemon (10.0.0.1:48884). Mar 20 21:37:48.157471 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 48884 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:48.159009 sshd-session[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:48.163883 systemd-logind[1436]: New session 10 of user core. Mar 20 21:37:48.173745 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 21:37:48.341683 sshd[4726]: Connection closed by 10.0.0.1 port 48884 Mar 20 21:37:48.340113 sshd-session[4723]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:48.357149 systemd[1]: sshd@9-10.0.0.3:22-10.0.0.1:48884.service: Deactivated successfully. Mar 20 21:37:48.358875 systemd[1]: session-10.scope: Deactivated successfully. Mar 20 21:37:48.359516 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Mar 20 21:37:48.361374 systemd[1]: Started sshd@10-10.0.0.3:22-10.0.0.1:48900.service - OpenSSH per-connection server daemon (10.0.0.1:48900). Mar 20 21:37:48.362481 systemd-logind[1436]: Removed session 10. Mar 20 21:37:48.416188 sshd[4740]: Accepted publickey for core from 10.0.0.1 port 48900 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:48.417311 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:48.421186 systemd-logind[1436]: New session 11 of user core. Mar 20 21:37:48.440816 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 20 21:37:48.497098 kubelet[2567]: I0320 21:37:48.496953 2567 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 20 21:37:48.500058 kubelet[2567]: I0320 21:37:48.500040 2567 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 20 21:37:48.663654 sshd[4743]: Connection closed by 10.0.0.1 port 48900 Mar 20 21:37:48.663714 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:48.677173 systemd[1]: sshd@10-10.0.0.3:22-10.0.0.1:48900.service: Deactivated successfully. Mar 20 21:37:48.678907 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 21:37:48.679570 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Mar 20 21:37:48.681349 systemd[1]: Started sshd@11-10.0.0.3:22-10.0.0.1:48916.service - OpenSSH per-connection server daemon (10.0.0.1:48916). Mar 20 21:37:48.682662 systemd-logind[1436]: Removed session 11. Mar 20 21:37:48.739671 sshd[4758]: Accepted publickey for core from 10.0.0.1 port 48916 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:48.740905 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:48.745336 systemd-logind[1436]: New session 12 of user core. Mar 20 21:37:48.754801 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 21:37:48.902984 sshd[4761]: Connection closed by 10.0.0.1 port 48916 Mar 20 21:37:48.903333 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:48.907236 systemd[1]: sshd@11-10.0.0.3:22-10.0.0.1:48916.service: Deactivated successfully. Mar 20 21:37:48.909053 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 21:37:48.909803 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Mar 20 21:37:48.910698 systemd-logind[1436]: Removed session 12. Mar 20 21:37:53.921763 systemd[1]: Started sshd@12-10.0.0.3:22-10.0.0.1:50454.service - OpenSSH per-connection server daemon (10.0.0.1:50454). Mar 20 21:37:53.976931 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 50454 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:53.978163 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:53.981680 systemd-logind[1436]: New session 13 of user core. Mar 20 21:37:53.994731 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 20 21:37:54.116002 sshd[4789]: Connection closed by 10.0.0.1 port 50454 Mar 20 21:37:54.116497 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:54.119722 systemd[1]: sshd@12-10.0.0.3:22-10.0.0.1:50454.service: Deactivated successfully. Mar 20 21:37:54.121425 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 21:37:54.122122 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Mar 20 21:37:54.122943 systemd-logind[1436]: Removed session 13. 
Mar 20 21:37:58.050406 containerd[1452]: time="2025-03-20T21:37:58.050354146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" id:\"058898a29eb5031022daf49d14f8fdeca92be3c65edde81fcb600314285bc544\" pid:4815 exited_at:{seconds:1742506678 nanos:50080270}" Mar 20 21:37:58.052208 kubelet[2567]: E0320 21:37:58.052183 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:37:58.065126 kubelet[2567]: I0320 21:37:58.065066 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7lkcl" podStartSLOduration=31.288644055 podStartE2EDuration="35.065052574s" podCreationTimestamp="2025-03-20 21:37:23 +0000 UTC" firstStartedPulling="2025-03-20 21:37:43.785580609 +0000 UTC m=+35.463873130" lastFinishedPulling="2025-03-20 21:37:47.561989128 +0000 UTC m=+39.240281649" observedRunningTime="2025-03-20 21:37:48.594403358 +0000 UTC m=+40.272695879" watchObservedRunningTime="2025-03-20 21:37:58.065052574 +0000 UTC m=+49.743345095" Mar 20 21:37:59.127768 systemd[1]: Started sshd@13-10.0.0.3:22-10.0.0.1:50468.service - OpenSSH per-connection server daemon (10.0.0.1:50468). Mar 20 21:37:59.189177 sshd[4828]: Accepted publickey for core from 10.0.0.1 port 50468 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:37:59.190411 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:37:59.193805 systemd-logind[1436]: New session 14 of user core. Mar 20 21:37:59.200755 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 21:37:59.328669 sshd[4830]: Connection closed by 10.0.0.1 port 50468 Mar 20 21:37:59.328975 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Mar 20 21:37:59.332356 systemd[1]: sshd@13-10.0.0.3:22-10.0.0.1:50468.service: Deactivated successfully. Mar 20 21:37:59.333893 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 21:37:59.334563 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Mar 20 21:37:59.335249 systemd-logind[1436]: Removed session 14. Mar 20 21:38:00.122006 kubelet[2567]: I0320 21:38:00.121957 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:38:00.156804 containerd[1452]: time="2025-03-20T21:38:00.156765030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" id:\"8a4c33a95e8deca2491c68ebaf42fdf3a04b22b1832e023937f899dba4fb4d48\" pid:4861 exited_at:{seconds:1742506680 nanos:151489604}" Mar 20 21:38:00.194438 containerd[1452]: time="2025-03-20T21:38:00.194403341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" id:\"90d37a1f2e20c10164c2320180fe05fb0d3e719924015151b5d1695144183c1f\" pid:4884 exited_at:{seconds:1742506680 nanos:194068739}" Mar 20 21:38:04.345926 systemd[1]: Started sshd@14-10.0.0.3:22-10.0.0.1:56948.service - OpenSSH per-connection server daemon (10.0.0.1:56948). 
Mar 20 21:38:04.396500 sshd[4898]: Accepted publickey for core from 10.0.0.1 port 56948 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:38:04.397639 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:38:04.401153 systemd-logind[1436]: New session 15 of user core. Mar 20 21:38:04.405738 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 21:38:04.592715 sshd[4900]: Connection closed by 10.0.0.1 port 56948 Mar 20 21:38:04.593051 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Mar 20 21:38:04.598200 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Mar 20 21:38:04.598460 systemd[1]: sshd@14-10.0.0.3:22-10.0.0.1:56948.service: Deactivated successfully. Mar 20 21:38:04.601223 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 21:38:04.602156 systemd-logind[1436]: Removed session 15. Mar 20 21:38:09.605255 systemd[1]: Started sshd@15-10.0.0.3:22-10.0.0.1:56960.service - OpenSSH per-connection server daemon (10.0.0.1:56960). Mar 20 21:38:09.660081 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 56960 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:38:09.661170 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:38:09.664769 systemd-logind[1436]: New session 16 of user core. Mar 20 21:38:09.676746 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 21:38:09.813155 sshd[4920]: Connection closed by 10.0.0.1 port 56960 Mar 20 21:38:09.813517 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Mar 20 21:38:09.826852 systemd[1]: sshd@15-10.0.0.3:22-10.0.0.1:56960.service: Deactivated successfully. Mar 20 21:38:09.828535 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 21:38:09.829198 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Mar 20 21:38:09.831130 systemd[1]: Started sshd@16-10.0.0.3:22-10.0.0.1:56968.service - OpenSSH per-connection server daemon (10.0.0.1:56968). Mar 20 21:38:09.832163 systemd-logind[1436]: Removed session 16. Mar 20 21:38:09.882683 sshd[4933]: Accepted publickey for core from 10.0.0.1 port 56968 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:38:09.883831 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:38:09.888085 systemd-logind[1436]: New session 17 of user core. Mar 20 21:38:09.899823 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 21:38:10.127637 sshd[4936]: Connection closed by 10.0.0.1 port 56968 Mar 20 21:38:10.128365 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Mar 20 21:38:10.138932 systemd[1]: sshd@16-10.0.0.3:22-10.0.0.1:56968.service: Deactivated successfully. Mar 20 21:38:10.140398 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 21:38:10.141117 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Mar 20 21:38:10.142809 systemd[1]: Started sshd@17-10.0.0.3:22-10.0.0.1:56970.service - OpenSSH per-connection server daemon (10.0.0.1:56970). Mar 20 21:38:10.143729 systemd-logind[1436]: Removed session 17. 
Mar 20 21:38:10.199685 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 56970 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:38:10.200902 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:38:10.205558 systemd-logind[1436]: New session 18 of user core. Mar 20 21:38:10.214745 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 21:38:11.595560 sshd[4949]: Connection closed by 10.0.0.1 port 56970 Mar 20 21:38:11.596966 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Mar 20 21:38:11.606407 systemd[1]: sshd@17-10.0.0.3:22-10.0.0.1:56970.service: Deactivated successfully. Mar 20 21:38:11.610527 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:38:11.611024 systemd[1]: session-18.scope: Consumed 479ms CPU time, 66.5M memory peak. Mar 20 21:38:11.612517 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:38:11.616726 systemd[1]: Started sshd@18-10.0.0.3:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976). Mar 20 21:38:11.618672 systemd-logind[1436]: Removed session 18. Mar 20 21:38:11.669282 sshd[4970]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:38:11.670448 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:38:11.674920 systemd-logind[1436]: New session 19 of user core. Mar 20 21:38:11.685769 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 21:38:12.014632 sshd[4973]: Connection closed by 10.0.0.1 port 56976 Mar 20 21:38:12.015814 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Mar 20 21:38:12.027568 systemd[1]: sshd@18-10.0.0.3:22-10.0.0.1:56976.service: Deactivated successfully. Mar 20 21:38:12.029286 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:38:12.030079 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:38:12.032221 systemd[1]: Started sshd@19-10.0.0.3:22-10.0.0.1:56982.service - OpenSSH per-connection server daemon (10.0.0.1:56982). Mar 20 21:38:12.033489 systemd-logind[1436]: Removed session 19. Mar 20 21:38:12.084882 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 56982 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8 Mar 20 21:38:12.085945 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:38:12.089950 systemd-logind[1436]: New session 20 of user core. Mar 20 21:38:12.098737 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:38:12.209783 sshd[4988]: Connection closed by 10.0.0.1 port 56982 Mar 20 21:38:12.210147 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Mar 20 21:38:12.213519 systemd[1]: sshd@19-10.0.0.3:22-10.0.0.1:56982.service: Deactivated successfully. Mar 20 21:38:12.215383 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:38:12.216051 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Mar 20 21:38:12.216829 systemd-logind[1436]: Removed session 20. 
Mar 20 21:38:13.469441 kubelet[2567]: I0320 21:38:13.469041 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 21:38:14.584719 containerd[1452]: time="2025-03-20T21:38:14.584664835Z" level=info msg="StopContainer for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" with timeout 300 (s)" Mar 20 21:38:14.592906 containerd[1452]: time="2025-03-20T21:38:14.592682371Z" level=info msg="Stop container \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" with signal terminated" Mar 20 21:38:14.712335 containerd[1452]: time="2025-03-20T21:38:14.711801386Z" level=info msg="StopContainer for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" with timeout 30 (s)" Mar 20 21:38:14.713359 containerd[1452]: time="2025-03-20T21:38:14.713267377Z" level=info msg="Stop container \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" with signal terminated" Mar 20 21:38:14.725475 systemd[1]: cri-containerd-e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d.scope: Deactivated successfully. Mar 20 21:38:14.728008 containerd[1452]: time="2025-03-20T21:38:14.727964126Z" level=info msg="received exit event container_id:\"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" id:\"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" pid:4664 exit_status:2 exited_at:{seconds:1742506694 nanos:727764263}" Mar 20 21:38:14.729553 containerd[1452]: time="2025-03-20T21:38:14.729436397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" id:\"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" pid:4664 exit_status:2 exited_at:{seconds:1742506694 nanos:727764263}" Mar 20 21:38:14.759146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d-rootfs.mount: Deactivated successfully. Mar 20 21:38:14.775527 containerd[1452]: time="2025-03-20T21:38:14.775395399Z" level=info msg="StopContainer for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" returns successfully" Mar 20 21:38:14.776351 containerd[1452]: time="2025-03-20T21:38:14.776300200Z" level=info msg="StopPodSandbox for \"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\"" Mar 20 21:38:14.776477 containerd[1452]: time="2025-03-20T21:38:14.776445267Z" level=info msg="Container to stop \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:38:14.785986 systemd[1]: cri-containerd-c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3.scope: Deactivated successfully. 
Mar 20 21:38:14.789635 containerd[1452]: time="2025-03-20T21:38:14.786546499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" id:\"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" pid:4589 exit_status:137 exited_at:{seconds:1742506694 nanos:786247566}" Mar 20 21:38:14.814263 containerd[1452]: time="2025-03-20T21:38:14.814233987Z" level=info msg="StopContainer for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" with timeout 5 (s)" Mar 20 21:38:14.814993 containerd[1452]: time="2025-03-20T21:38:14.814964403Z" level=info msg="Stop container \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" with signal terminated" Mar 20 21:38:14.824591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3-rootfs.mount: Deactivated successfully. Mar 20 21:38:14.827751 containerd[1452]: time="2025-03-20T21:38:14.827720082Z" level=info msg="shim disconnected" id=c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3 namespace=k8s.io Mar 20 21:38:14.827828 containerd[1452]: time="2025-03-20T21:38:14.827750240Z" level=warning msg="cleaning up after shim disconnected" id=c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3 namespace=k8s.io Mar 20 21:38:14.827828 containerd[1452]: time="2025-03-20T21:38:14.827780477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:38:14.841799 systemd[1]: cri-containerd-c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158.scope: Deactivated successfully. Mar 20 21:38:14.842094 systemd[1]: cri-containerd-c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158.scope: Consumed 2.018s CPU time, 159.5M memory peak, 6M read from disk, 1M written to disk. Mar 20 21:38:14.847063 containerd[1452]: time="2025-03-20T21:38:14.846785767Z" level=info msg="received exit event container_id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" pid:3531 exited_at:{seconds:1742506694 nanos:846415520}" Mar 20 21:38:14.875100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158-rootfs.mount: Deactivated successfully. Mar 20 21:38:14.878053 containerd[1452]: time="2025-03-20T21:38:14.877977187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" id:\"0d7ad7cc4ecef55d1e9fa8e6e7811fd8efd19f9d60c504cd4b662cdf1299709d\" pid:5026 exited_at:{seconds:1742506694 nanos:809999999}" Mar 20 21:38:14.879094 containerd[1452]: time="2025-03-20T21:38:14.879057532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" id:\"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" pid:3531 exited_at:{seconds:1742506694 nanos:846415520}" Mar 20 21:38:14.881771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3-shm.mount: Deactivated successfully. 
Mar 20 21:38:14.885171 containerd[1452]: time="2025-03-20T21:38:14.885115920Z" level=info msg="received exit event sandbox_id:\"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" exit_status:137 exited_at:{seconds:1742506694 nanos:786247566}" Mar 20 21:38:14.901084 containerd[1452]: time="2025-03-20T21:38:14.901037921Z" level=info msg="StopContainer for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" returns successfully" Mar 20 21:38:14.901506 containerd[1452]: time="2025-03-20T21:38:14.901480722Z" level=info msg="StopPodSandbox for \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\"" Mar 20 21:38:14.901574 containerd[1452]: time="2025-03-20T21:38:14.901537277Z" level=info msg="Container to stop \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:38:14.901574 containerd[1452]: time="2025-03-20T21:38:14.901548836Z" level=info msg="Container to stop \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:38:14.901574 containerd[1452]: time="2025-03-20T21:38:14.901557915Z" level=info msg="Container to stop \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:38:14.912457 systemd[1]: cri-containerd-e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1.scope: Deactivated successfully. Mar 20 21:38:14.913145 containerd[1452]: time="2025-03-20T21:38:14.913105541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" id:\"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" pid:3094 exit_status:137 exited_at:{seconds:1742506694 nanos:912503954}" Mar 20 21:38:14.956485 containerd[1452]: time="2025-03-20T21:38:14.956435574Z" level=info msg="shim disconnected" id=e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1 namespace=k8s.io Mar 20 21:38:14.956485 containerd[1452]: time="2025-03-20T21:38:14.956468651Z" level=warning msg="cleaning up after shim disconnected" id=e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1 namespace=k8s.io Mar 20 21:38:14.956728 containerd[1452]: time="2025-03-20T21:38:14.956505048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:38:14.962440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1-rootfs.mount: Deactivated successfully. 
Mar 20 21:38:14.969119 systemd-networkd[1398]: cali6b8bfb6d1ff: Link DOWN Mar 20 21:38:14.969125 systemd-networkd[1398]: cali6b8bfb6d1ff: Lost carrier Mar 20 21:38:14.986272 containerd[1452]: time="2025-03-20T21:38:14.986211118Z" level=info msg="received exit event sandbox_id:\"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" exit_status:137 exited_at:{seconds:1742506694 nanos:912503954}" Mar 20 21:38:14.987989 containerd[1452]: time="2025-03-20T21:38:14.987838256Z" level=info msg="TearDown network for sandbox \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" successfully" Mar 20 21:38:14.987989 containerd[1452]: time="2025-03-20T21:38:14.987867173Z" level=info msg="StopPodSandbox for \"e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1\" returns successfully" Mar 20 21:38:15.030338 kubelet[2567]: I0320 21:38:15.030285 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-lib-calico\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030338 kubelet[2567]: I0320 21:38:15.030324 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-flexvol-driver-host\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030338 kubelet[2567]: I0320 21:38:15.030342 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-bin-dir\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030778 kubelet[2567]: I0320 21:38:15.030365 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4c6ed22-520f-437f-9056-61327fcbf4c9-node-certs\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030778 kubelet[2567]: I0320 21:38:15.030383 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-policysync\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030778 kubelet[2567]: I0320 21:38:15.030399 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-run-calico\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030778 kubelet[2567]: I0320 21:38:15.030413 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-xtables-lock\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030778 kubelet[2567]: I0320 21:38:15.030430 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkhd9\" (UniqueName: \"kubernetes.io/projected/f4c6ed22-520f-437f-9056-61327fcbf4c9-kube-api-access-dkhd9\") pod 
\"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030778 kubelet[2567]: I0320 21:38:15.030445 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-net-dir\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030937 kubelet[2567]: I0320 21:38:15.030493 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-lib-modules\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030937 kubelet[2567]: I0320 21:38:15.030512 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-log-dir\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.030937 kubelet[2567]: I0320 21:38:15.030533 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4c6ed22-520f-437f-9056-61327fcbf4c9-tigera-ca-bundle\") pod \"f4c6ed22-520f-437f-9056-61327fcbf4c9\" (UID: \"f4c6ed22-520f-437f-9056-61327fcbf4c9\") " Mar 20 21:38:15.032371 kubelet[2567]: I0320 21:38:15.031678 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-policysync" (OuterVolumeSpecName: "policysync") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.032371 kubelet[2567]: I0320 21:38:15.031693 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.032371 kubelet[2567]: I0320 21:38:15.031732 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.034739 kubelet[2567]: E0320 21:38:15.033536 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4c6ed22-520f-437f-9056-61327fcbf4c9" containerName="flexvol-driver" Mar 20 21:38:15.034739 kubelet[2567]: E0320 21:38:15.034489 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4c6ed22-520f-437f-9056-61327fcbf4c9" containerName="install-cni" Mar 20 21:38:15.034739 kubelet[2567]: E0320 21:38:15.034498 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4c6ed22-520f-437f-9056-61327fcbf4c9" containerName="calico-node" Mar 20 21:38:15.034739 kubelet[2567]: I0320 21:38:15.034588 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.035068 kubelet[2567]: I0320 21:38:15.035034 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.035123 kubelet[2567]: I0320 21:38:15.035085 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.035123 kubelet[2567]: I0320 21:38:15.035105 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.035177 kubelet[2567]: I0320 21:38:15.035124 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.035177 kubelet[2567]: I0320 21:38:15.035141 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:38:15.035220 kubelet[2567]: I0320 21:38:15.035201 2567 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c6ed22-520f-437f-9056-61327fcbf4c9" containerName="calico-node" Mar 20 21:38:15.036188 kubelet[2567]: I0320 21:38:15.036155 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4c6ed22-520f-437f-9056-61327fcbf4c9-node-certs" (OuterVolumeSpecName: "node-certs") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 21:38:15.039422 kubelet[2567]: I0320 21:38:15.039387 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c6ed22-520f-437f-9056-61327fcbf4c9-kube-api-access-dkhd9" (OuterVolumeSpecName: "kube-api-access-dkhd9") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "kube-api-access-dkhd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:38:15.046365 systemd[1]: Created slice kubepods-besteffort-pod1ac94d46_0be5_488d_9129_c69f14ba1880.slice - libcontainer container kubepods-besteffort-pod1ac94d46_0be5_488d_9129_c69f14ba1880.slice. Mar 20 21:38:15.051396 kubelet[2567]: I0320 21:38:15.051360 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c6ed22-520f-437f-9056-61327fcbf4c9-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f4c6ed22-520f-437f-9056-61327fcbf4c9" (UID: "f4c6ed22-520f-437f-9056-61327fcbf4c9"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:14.965 [INFO][5132] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:14.965 [INFO][5132] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" iface="eth0" netns="/var/run/netns/cni-d9b5e8fe-95b4-a34b-b969-b9869d632850" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:14.966 [INFO][5132] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" iface="eth0" netns="/var/run/netns/cni-d9b5e8fe-95b4-a34b-b969-b9869d632850" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:14.976 [INFO][5132] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" after=11.109264ms iface="eth0" netns="/var/run/netns/cni-d9b5e8fe-95b4-a34b-b969-b9869d632850" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:14.976 [INFO][5132] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:14.976 [INFO][5132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.002 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" HandleID="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Workload="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.002 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.002 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.051 [INFO][5181] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" HandleID="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Workload="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.051 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" HandleID="k8s-pod-network.c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Workload="localhost-k8s-calico--kube--controllers--9c4b576c4--ld24m-eth0" Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.055 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 20 21:38:15.058159 containerd[1452]: 2025-03-20 21:38:15.056 [INFO][5132] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3" Mar 20 21:38:15.058749 containerd[1452]: time="2025-03-20T21:38:15.058715170Z" level=info msg="TearDown network for sandbox \"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" successfully" Mar 20 21:38:15.058805 containerd[1452]: time="2025-03-20T21:38:15.058750727Z" level=info msg="StopPodSandbox for \"c1faaeab47b47e0b01fc2beeb04a7c9681d584c66b9b2af1cda00493cbf100e3\" returns successfully" Mar 20 21:38:15.131459 kubelet[2567]: I0320 21:38:15.131314 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-tigera-ca-bundle\") pod \"48cfd0f2-c351-498a-b143-cbea7fdfcbf4\" (UID: \"48cfd0f2-c351-498a-b143-cbea7fdfcbf4\") " Mar 20 21:38:15.131459 kubelet[2567]: I0320 21:38:15.131357 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l2f8\" (UniqueName: \"kubernetes.io/projected/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-kube-api-access-2l2f8\") pod \"48cfd0f2-c351-498a-b143-cbea7fdfcbf4\" (UID: \"48cfd0f2-c351-498a-b143-cbea7fdfcbf4\") " Mar 20 21:38:15.131459 kubelet[2567]: I0320 21:38:15.131407 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac94d46-0be5-488d-9129-c69f14ba1880-tigera-ca-bundle\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.131459 kubelet[2567]: I0320 21:38:15.131428 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-var-run-calico\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.131459 kubelet[2567]: I0320 21:38:15.131444 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-lib-modules\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.132381 kubelet[2567]: I0320 21:38:15.131459 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-flexvol-driver-host\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.132381 kubelet[2567]: I0320 21:38:15.131477 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-policysync\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.132381 kubelet[2567]: I0320 21:38:15.131492 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-cni-net-dir\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.132381 
kubelet[2567]: I0320 21:38:15.131507 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f5r5\" (UniqueName: \"kubernetes.io/projected/1ac94d46-0be5-488d-9129-c69f14ba1880-kube-api-access-7f5r5\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.132381 kubelet[2567]: I0320 21:38:15.131521 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ac94d46-0be5-488d-9129-c69f14ba1880-node-certs\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.133115 kubelet[2567]: I0320 21:38:15.131535 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-var-lib-calico\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.133115 kubelet[2567]: I0320 21:38:15.131550 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-cni-bin-dir\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.133115 kubelet[2567]: I0320 21:38:15.131566 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-xtables-lock\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.133115 kubelet[2567]: I0320 21:38:15.131581 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ac94d46-0be5-488d-9129-c69f14ba1880-cni-log-dir\") pod \"calico-node-kgj92\" (UID: \"1ac94d46-0be5-488d-9129-c69f14ba1880\") " pod="calico-system/calico-node-kgj92" Mar 20 21:38:15.133115 kubelet[2567]: I0320 21:38:15.131604 2567 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133115 kubelet[2567]: I0320 21:38:15.131637 2567 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131649 2567 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131657 2567 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4c6ed22-520f-437f-9056-61327fcbf4c9-node-certs\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131664 2567 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-var-run-calico\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131671 2567 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131678 2567 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-policysync\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131705 2567 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dkhd9\" (UniqueName: \"kubernetes.io/projected/f4c6ed22-520f-437f-9056-61327fcbf4c9-kube-api-access-dkhd9\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131760 2567 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133463 kubelet[2567]: I0320 21:38:15.131781 2567 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133680 kubelet[2567]: I0320 21:38:15.131789 2567 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4c6ed22-520f-437f-9056-61327fcbf4c9-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.133680 kubelet[2567]: I0320 21:38:15.131798 2567 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4c6ed22-520f-437f-9056-61327fcbf4c9-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.134596 kubelet[2567]: I0320 21:38:15.134564 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-kube-api-access-2l2f8" (OuterVolumeSpecName: "kube-api-access-2l2f8") pod "48cfd0f2-c351-498a-b143-cbea7fdfcbf4" (UID: "48cfd0f2-c351-498a-b143-cbea7fdfcbf4"). InnerVolumeSpecName "kube-api-access-2l2f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:38:15.136673 kubelet[2567]: I0320 21:38:15.136638 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "48cfd0f2-c351-498a-b143-cbea7fdfcbf4" (UID: "48cfd0f2-c351-498a-b143-cbea7fdfcbf4"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:38:15.232446 kubelet[2567]: I0320 21:38:15.232397 2567 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2l2f8\" (UniqueName: \"kubernetes.io/projected/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-kube-api-access-2l2f8\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.232446 kubelet[2567]: I0320 21:38:15.232436 2567 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48cfd0f2-c351-498a-b143-cbea7fdfcbf4-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 20 21:38:15.350467 kubelet[2567]: E0320 21:38:15.350408 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:38:15.351185 containerd[1452]: time="2025-03-20T21:38:15.351142596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kgj92,Uid:1ac94d46-0be5-488d-9129-c69f14ba1880,Namespace:calico-system,Attempt:0,}" Mar 20 21:38:15.367549 containerd[1452]: time="2025-03-20T21:38:15.367512682Z" level=info msg="connecting to shim f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19" address="unix:///run/containerd/s/3066b6abaf15df2d342038df399657d132d32f8b94ecaa3bb6020200f683b415" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:38:15.394811 systemd[1]: Started cri-containerd-f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19.scope - libcontainer container f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19. Mar 20 21:38:15.422506 containerd[1452]: time="2025-03-20T21:38:15.422460460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kgj92,Uid:1ac94d46-0be5-488d-9129-c69f14ba1880,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\"" Mar 20 21:38:15.423774 kubelet[2567]: E0320 21:38:15.423705 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 20 21:38:15.428580 containerd[1452]: time="2025-03-20T21:38:15.428523279Z" level=info msg="CreateContainer within sandbox \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 20 21:38:15.440091 containerd[1452]: time="2025-03-20T21:38:15.440027768Z" level=info msg="Container d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:38:15.448527 containerd[1452]: time="2025-03-20T21:38:15.448386916Z" level=info msg="CreateContainer within sandbox \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\"" Mar 20 21:38:15.448917 containerd[1452]: time="2025-03-20T21:38:15.448833360Z" level=info msg="StartContainer for \"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\"" Mar 20 21:38:15.451596 containerd[1452]: time="2025-03-20T21:38:15.451469142Z" level=info msg="connecting to shim d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142" address="unix:///run/containerd/s/3066b6abaf15df2d342038df399657d132d32f8b94ecaa3bb6020200f683b415" protocol=ttrpc version=3 Mar 20 21:38:15.469259 systemd[1]: Started 
cri-containerd-d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142.scope - libcontainer container d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142. Mar 20 21:38:15.509981 containerd[1452]: time="2025-03-20T21:38:15.509871074Z" level=info msg="StartContainer for \"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\" returns successfully" Mar 20 21:38:15.537985 systemd[1]: cri-containerd-d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142.scope: Deactivated successfully. Mar 20 21:38:15.538321 systemd[1]: cri-containerd-d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142.scope: Consumed 40ms CPU time, 17.8M memory peak, 9.8M read from disk, 6.2M written to disk. Mar 20 21:38:15.539741 containerd[1452]: time="2025-03-20T21:38:15.539696728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\" id:\"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\" pid:5253 exited_at:{seconds:1742506695 nanos:539320999}" Mar 20 21:38:15.539932 containerd[1452]: time="2025-03-20T21:38:15.539909310Z" level=info msg="received exit event container_id:\"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\" id:\"d75520133845021855200208047c0563363c5a347e98ad45bd9df50c4c8c0142\" pid:5253 exited_at:{seconds:1742506695 nanos:539320999}" Mar 20 21:38:15.639093 kubelet[2567]: I0320 21:38:15.639057 2567 scope.go:117] "RemoveContainer" containerID="e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d" Mar 20 21:38:15.642695 containerd[1452]: time="2025-03-20T21:38:15.641896479Z" level=info msg="RemoveContainer for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\"" Mar 20 21:38:15.646283 containerd[1452]: time="2025-03-20T21:38:15.645471904Z" level=info msg="RemoveContainer for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" returns successfully" Mar 20 21:38:15.646869 kubelet[2567]: I0320 21:38:15.645661 2567 scope.go:117] "RemoveContainer" containerID="e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d" Mar 20 21:38:15.647109 containerd[1452]: time="2025-03-20T21:38:15.647038414Z" level=error msg="ContainerStatus for \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\": not found" Mar 20 21:38:15.647893 systemd[1]: Removed slice kubepods-besteffort-pod48cfd0f2_c351_498a_b143_cbea7fdfcbf4.slice - libcontainer container kubepods-besteffort-pod48cfd0f2_c351_498a_b143_cbea7fdfcbf4.slice. 
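The TaskExit and exit events above carry their timestamps as raw epoch values (exited_at:{seconds:1742506695 nanos:539320999}). A minimal Python check, using only the numbers printed in the log, converts that pair back to the journal's wall-clock time:

    from datetime import datetime, timezone

    # seconds/nanos copied from the flexvol-driver TaskExit event above
    seconds, nanos = 1742506695, 539320999

    ts = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    print(ts.isoformat())  # 2025-03-20T21:38:15.539321+00:00, matching the 21:38:15.539* stamps

This confirms the exit event and the surrounding journal entries describe the same instant.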
Mar 20 21:38:15.651727 kubelet[2567]: E0320 21:38:15.651691 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\": not found" containerID="e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d"
Mar 20 21:38:15.651800 kubelet[2567]: I0320 21:38:15.651730 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d"} err="failed to get container status \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0504ee96ce479d79937d7214488857e79d36cd31b035603b65bbc6a7525e06d\": not found"
Mar 20 21:38:15.652088 kubelet[2567]: I0320 21:38:15.652027 2567 scope.go:117] "RemoveContainer" containerID="c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158"
Mar 20 21:38:15.655131 containerd[1452]: time="2025-03-20T21:38:15.655076670Z" level=info msg="RemoveContainer for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\""
Mar 20 21:38:15.655225 kubelet[2567]: E0320 21:38:15.655199 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:38:15.656660 containerd[1452]: time="2025-03-20T21:38:15.656597624Z" level=info msg="CreateContainer within sandbox \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 20 21:38:15.656901 systemd[1]: Removed slice kubepods-besteffort-podf4c6ed22_520f_437f_9056_61327fcbf4c9.slice - libcontainer container kubepods-besteffort-podf4c6ed22_520f_437f_9056_61327fcbf4c9.slice.
Mar 20 21:38:15.656986 systemd[1]: kubepods-besteffort-podf4c6ed22_520f_437f_9056_61327fcbf4c9.slice: Consumed 2.492s CPU time, 217.6M memory peak, 6M read from disk, 157.6M written to disk.
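The last two systemd lines show per-unit resource accounting being reported as the removed pod's slice is released ("Consumed 2.492s CPU time, 217.6M memory peak, ..."). If you are mining a saved journal for these summaries, a small sketch (the regex is written against the exact message shape visible here) pulls the fields out:

    import re

    line = ("Mar 20 21:38:15.656986 systemd[1]: "
            "kubepods-besteffort-podf4c6ed22_520f_437f_9056_61327fcbf4c9.slice: "
            "Consumed 2.492s CPU time, 217.6M memory peak, 6M read from disk, 157.6M written to disk.")

    m = re.search(r"systemd\[\d+\]: (\S+): Consumed (.+)\.$", line)
    if m:
        unit, fields = m.groups()
        print(unit)
        for field in fields.split(", "):
            print(" ", field)   # e.g. "2.492s CPU time"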
Mar 20 21:38:15.661121 containerd[1452]: time="2025-03-20T21:38:15.661079493Z" level=info msg="RemoveContainer for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" returns successfully"
Mar 20 21:38:15.663385 kubelet[2567]: I0320 21:38:15.662333 2567 scope.go:117] "RemoveContainer" containerID="63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91"
Mar 20 21:38:15.665658 containerd[1452]: time="2025-03-20T21:38:15.665595200Z" level=info msg="RemoveContainer for \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\""
Mar 20 21:38:15.672425 containerd[1452]: time="2025-03-20T21:38:15.672018429Z" level=info msg="RemoveContainer for \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" returns successfully"
Mar 20 21:38:15.672729 kubelet[2567]: I0320 21:38:15.672656 2567 scope.go:117] "RemoveContainer" containerID="39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829"
Mar 20 21:38:15.678035 containerd[1452]: time="2025-03-20T21:38:15.677992495Z" level=info msg="RemoveContainer for \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\""
Mar 20 21:38:15.689245 containerd[1452]: time="2025-03-20T21:38:15.689160652Z" level=info msg="RemoveContainer for \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" returns successfully"
Mar 20 21:38:15.689461 kubelet[2567]: I0320 21:38:15.689428 2567 scope.go:117] "RemoveContainer" containerID="c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158"
Mar 20 21:38:15.691778 containerd[1452]: time="2025-03-20T21:38:15.691728080Z" level=error msg="ContainerStatus for \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\": not found"
Mar 20 21:38:15.692068 kubelet[2567]: E0320 21:38:15.692040 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\": not found" containerID="c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158"
Mar 20 21:38:15.692176 kubelet[2567]: I0320 21:38:15.692069 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158"} err="failed to get container status \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\": rpc error: code = NotFound desc = an error occurred when try to find container \"c224e159f0c7388fbca2b7942eac918c5faf0746fe5890ac6577392f1e9ee158\": not found"
Mar 20 21:38:15.692176 kubelet[2567]: I0320 21:38:15.692135 2567 scope.go:117] "RemoveContainer" containerID="63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91"
Mar 20 21:38:15.692998 containerd[1452]: time="2025-03-20T21:38:15.692947019Z" level=error msg="ContainerStatus for \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\": not found"
Mar 20 21:38:15.693123 kubelet[2567]: E0320 21:38:15.693092 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\": not found" containerID="63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91"
Mar 20 21:38:15.693175 kubelet[2567]: I0320 21:38:15.693122 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91"} err="failed to get container status \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\": rpc error: code = NotFound desc = an error occurred when try to find container \"63eedf710cd7b37fc43507a19420c82a413ed711ccf61b62ce54644cb15e0c91\": not found"
Mar 20 21:38:15.693175 kubelet[2567]: I0320 21:38:15.693140 2567 scope.go:117] "RemoveContainer" containerID="39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829"
Mar 20 21:38:15.694312 containerd[1452]: time="2025-03-20T21:38:15.693280831Z" level=error msg="ContainerStatus for \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\": not found"
Mar 20 21:38:15.694388 kubelet[2567]: E0320 21:38:15.693491 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\": not found" containerID="39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829"
Mar 20 21:38:15.694388 kubelet[2567]: I0320 21:38:15.693509 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829"} err="failed to get container status \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\": rpc error: code = NotFound desc = an error occurred when try to find container \"39db45afc20814fee94d62f79b5ec308822aab5b9a977553893866c140315829\": not found"
Mar 20 21:38:15.698397 containerd[1452]: time="2025-03-20T21:38:15.698336374Z" level=info msg="Container 2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:38:15.715804 containerd[1452]: time="2025-03-20T21:38:15.715753334Z" level=info msg="CreateContainer within sandbox \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\""
Mar 20 21:38:15.716363 containerd[1452]: time="2025-03-20T21:38:15.716337125Z" level=info msg="StartContainer for \"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\""
Mar 20 21:38:15.718364 containerd[1452]: time="2025-03-20T21:38:15.718230449Z" level=info msg="connecting to shim 2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f" address="unix:///run/containerd/s/3066b6abaf15df2d342038df399657d132d32f8b94ecaa3bb6020200f683b415" protocol=ttrpc version=3
Mar 20 21:38:15.771875 systemd[1]: var-lib-kubelet-pods-48cfd0f2\x2dc351\x2d498a\x2db143\x2dcbea7fdfcbf4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Mar 20 21:38:15.772256 systemd[1]: run-netns-cni\x2dd9b5e8fe\x2d95b4\x2da34b\x2db969\x2db9869d632850.mount: Deactivated successfully.
Mar 20 21:38:15.772319 systemd[1]: var-lib-kubelet-pods-f4c6ed22\x2d520f\x2d437f\x2d9056\x2d61327fcbf4c9-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
Mar 20 21:38:15.772373 systemd[1]: var-lib-kubelet-pods-48cfd0f2\x2dc351\x2d498a\x2db143\x2dcbea7fdfcbf4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2l2f8.mount: Deactivated successfully.
Mar 20 21:38:15.772437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e73373abbb209ed61daa36d838f2241c6446bc211589f0e22be3ef71ddc4b8c1-shm.mount: Deactivated successfully.
Mar 20 21:38:15.772486 systemd[1]: var-lib-kubelet-pods-f4c6ed22\x2d520f\x2d437f\x2d9056\x2d61327fcbf4c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddkhd9.mount: Deactivated successfully.
Mar 20 21:38:15.772531 systemd[1]: var-lib-kubelet-pods-f4c6ed22\x2d520f\x2d437f\x2d9056\x2d61327fcbf4c9-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Mar 20 21:38:15.780761 systemd[1]: Started cri-containerd-2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f.scope - libcontainer container 2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f.
Mar 20 21:38:15.814564 containerd[1452]: time="2025-03-20T21:38:15.814384260Z" level=info msg="StartContainer for \"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\" returns successfully"
Mar 20 21:38:16.310116 systemd[1]: cri-containerd-2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f.scope: Deactivated successfully.
Mar 20 21:38:16.310713 systemd[1]: cri-containerd-2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f.scope: Consumed 626ms CPU time, 126.2M memory peak, 101.5M read from disk.
Mar 20 21:38:16.311223 containerd[1452]: time="2025-03-20T21:38:16.310728785Z" level=info msg="received exit event container_id:\"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\" id:\"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\" pid:5306 exited_at:{seconds:1742506696 nanos:310502243}"
Mar 20 21:38:16.311223 containerd[1452]: time="2025-03-20T21:38:16.310765822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\" id:\"2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f\" pid:5306 exited_at:{seconds:1742506696 nanos:310502243}"
Mar 20 21:38:16.314143 containerd[1452]: time="2025-03-20T21:38:16.314003291Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/10-calico.conflist\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config"
Mar 20 21:38:16.330050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2233532fef50fbf7d389db5981e850f1a840d80ea11ce9a954ce3bd1650c5e6f-rootfs.mount: Deactivated successfully.
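The "unexpected end of JSON input" error above is containerd's CNI config watcher firing on a WRITE event while, presumably, install-cni is still writing /etc/cni/net.d/10-calico.conflist, so it read a truncated file; the successful calico-node start that follows suggests the config loaded cleanly once the write completed. A quick stand-alone validity check for a conflist (the required top-level keys follow the CNI network-configuration-list format):

    import json

    def check_conflist(path="/etc/cni/net.d/10-calico.conflist"):
        with open(path) as f:
            conf = json.load(f)          # raises ValueError on truncated JSON, as in the log
        for key in ("cniVersion", "name", "plugins"):
            if key not in conf:
                raise ValueError(f"missing required key: {key!r}")
        print(f"{conf['name']}: cniVersion {conf['cniVersion']}, {len(conf['plugins'])} plugin(s)")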
Mar 20 21:38:16.406324 kubelet[2567]: I0320 21:38:16.406289 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cfd0f2-c351-498a-b143-cbea7fdfcbf4" path="/var/lib/kubelet/pods/48cfd0f2-c351-498a-b143-cbea7fdfcbf4/volumes"
Mar 20 21:38:16.406815 kubelet[2567]: I0320 21:38:16.406699 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c6ed22-520f-437f-9056-61327fcbf4c9" path="/var/lib/kubelet/pods/f4c6ed22-520f-437f-9056-61327fcbf4c9/volumes"
Mar 20 21:38:16.660167 kubelet[2567]: E0320 21:38:16.660077 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:38:16.672093 containerd[1452]: time="2025-03-20T21:38:16.672048890Z" level=info msg="CreateContainer within sandbox \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 20 21:38:16.683211 containerd[1452]: time="2025-03-20T21:38:16.682374928Z" level=info msg="Container 4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b: CDI devices from CRI Config.CDIDevices: []"
Mar 20 21:38:16.694322 containerd[1452]: time="2025-03-20T21:38:16.694111536Z" level=info msg="CreateContainer within sandbox \"f8a68233f63825a544322076ea9474118c1e0112e6ff3473b81d08d2e6845c19\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b\""
Mar 20 21:38:16.694659 containerd[1452]: time="2025-03-20T21:38:16.694632736Z" level=info msg="StartContainer for \"4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b\""
Mar 20 21:38:16.696534 containerd[1452]: time="2025-03-20T21:38:16.696502991Z" level=info msg="connecting to shim 4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b" address="unix:///run/containerd/s/3066b6abaf15df2d342038df399657d132d32f8b94ecaa3bb6020200f683b415" protocol=ttrpc version=3
Mar 20 21:38:16.716767 systemd[1]: Started cri-containerd-4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b.scope - libcontainer container 4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b.
Mar 20 21:38:16.752994 containerd[1452]: time="2025-03-20T21:38:16.752868094Z" level=info msg="StartContainer for \"4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b\" returns successfully"
Mar 20 21:38:17.225535 systemd[1]: Started sshd@20-10.0.0.3:22-10.0.0.1:40402.service - OpenSSH per-connection server daemon (10.0.0.1:40402).
Mar 20 21:38:17.288388 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 40402 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8
Mar 20 21:38:17.289665 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:38:17.294878 systemd-logind[1436]: New session 21 of user core.
Mar 20 21:38:17.302825 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 20 21:38:17.466586 sshd[5404]: Connection closed by 10.0.0.1 port 40402
Mar 20 21:38:17.466917 sshd-session[5402]: pam_unix(sshd:session): session closed for user core
Mar 20 21:38:17.470982 systemd[1]: sshd@20-10.0.0.3:22-10.0.0.1:40402.service: Deactivated successfully.
Mar 20 21:38:17.472854 systemd[1]: session-21.scope: Deactivated successfully.
Mar 20 21:38:17.473532 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit.
Mar 20 21:38:17.475072 systemd-logind[1436]: Removed session 21.
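The "Cleaned up orphaned pod volumes dir" entries above are kubelet housekeeping deleting /var/lib/kubelet/pods/<podUID>/volumes once every volume of a removed pod has been unmounted. A hedged sketch for spotting pod directories that still hold a volumes/ subtree (run on the node; the path is the one printed in the log):

    import os

    PODS_DIR = "/var/lib/kubelet/pods"

    # Pod UIDs whose volumes/ dir still exists on disk; for pods that are long
    # gone, these are the dirs kubelet's housekeeping pass would clean up.
    for uid in sorted(os.listdir(PODS_DIR)):
        if os.path.isdir(os.path.join(PODS_DIR, uid, "volumes")):
            print(uid)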
Mar 20 21:38:17.675413 kubelet[2567]: E0320 21:38:17.675360 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:38:17.696024 kubelet[2567]: I0320 21:38:17.695953 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kgj92" podStartSLOduration=2.695927045 podStartE2EDuration="2.695927045s" podCreationTimestamp="2025-03-20 21:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:38:17.695681223 +0000 UTC m=+69.373973744" watchObservedRunningTime="2025-03-20 21:38:17.695927045 +0000 UTC m=+69.374219566"
Mar 20 21:38:17.732368 containerd[1452]: time="2025-03-20T21:38:17.732316397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b\" id:\"8fa5422b10b9abc8ea102af372b5220fe0b5e252b2051c9eb2b9e761f4e1e1bf\" pid:5428 exit_status:1 exited_at:{seconds:1742506697 nanos:732042537}"
Mar 20 21:38:18.677827 kubelet[2567]: E0320 21:38:18.677478 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 20 21:38:18.738312 containerd[1452]: time="2025-03-20T21:38:18.738261389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4699842050002fe51fbd2fc16f517973e6c200d37af3cf5211c3d3fc1bb4a55b\" id:\"01126bdbe16c13e7568d8e1b2b9b959c70c774ada4f44af5d0a1c480ed33140c\" pid:5655 exit_status:1 exited_at:{seconds:1742506698 nanos:737952290}"
Mar 20 21:38:19.222311 systemd[1]: cri-containerd-a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197.scope: Deactivated successfully.
Mar 20 21:38:19.223089 systemd[1]: cri-containerd-a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197.scope: Consumed 379ms CPU time, 26.8M memory peak, 5.7M read from disk.
Mar 20 21:38:19.224492 containerd[1452]: time="2025-03-20T21:38:19.223917253Z" level=info msg="received exit event container_id:\"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" id:\"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" pid:3210 exit_status:1 exited_at:{seconds:1742506699 nanos:223324010}"
Mar 20 21:38:19.224970 containerd[1452]: time="2025-03-20T21:38:19.224839794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" id:\"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" pid:3210 exit_status:1 exited_at:{seconds:1742506699 nanos:223324010}"
Mar 20 21:38:19.244275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197-rootfs.mount: Deactivated successfully.
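The pod_startup_latency_tracker entry above reports podStartSLOduration=2.695927045 with both pull timestamps at their zero value (0001-01-01), i.e. no image pull contributed. The figure is simply watchObservedRunningTime minus podCreationTimestamp, which can be re-derived from the values in the entry:

    from datetime import datetime, timezone

    created  = datetime(2025, 3, 20, 21, 38, 15, tzinfo=timezone.utc)          # podCreationTimestamp
    observed = datetime(2025, 3, 20, 21, 38, 17, 695927, tzinfo=timezone.utc)  # watchObservedRunningTime, to the microsecond

    print((observed - created).total_seconds())  # 2.695927, matching podStartSLOduration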
Mar 20 21:38:19.254706 containerd[1452]: time="2025-03-20T21:38:19.254669060Z" level=info msg="StopContainer for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" returns successfully"
Mar 20 21:38:19.255167 containerd[1452]: time="2025-03-20T21:38:19.255142750Z" level=info msg="StopPodSandbox for \"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\""
Mar 20 21:38:19.255222 containerd[1452]: time="2025-03-20T21:38:19.255207146Z" level=info msg="Container to stop \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 21:38:19.261331 systemd[1]: cri-containerd-f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18.scope: Deactivated successfully.
Mar 20 21:38:19.262855 containerd[1452]: time="2025-03-20T21:38:19.262806703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" id:\"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" pid:3086 exit_status:137 exited_at:{seconds:1742506699 nanos:262411888}"
Mar 20 21:38:19.288675 containerd[1452]: time="2025-03-20T21:38:19.288629143Z" level=info msg="shim disconnected" id=f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18 namespace=k8s.io
Mar 20 21:38:19.288864 containerd[1452]: time="2025-03-20T21:38:19.288661341Z" level=warning msg="cleaning up after shim disconnected" id=f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18 namespace=k8s.io
Mar 20 21:38:19.288864 containerd[1452]: time="2025-03-20T21:38:19.288695339Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 21:38:19.290074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18-rootfs.mount: Deactivated successfully.
Mar 20 21:38:19.322090 containerd[1452]: time="2025-03-20T21:38:19.322039861Z" level=info msg="received exit event sandbox_id:\"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" exit_status:137 exited_at:{seconds:1742506699 nanos:262411888}"
Mar 20 21:38:19.322901 containerd[1452]: time="2025-03-20T21:38:19.322167013Z" level=info msg="TearDown network for sandbox \"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" successfully"
Mar 20 21:38:19.322901 containerd[1452]: time="2025-03-20T21:38:19.322370280Z" level=info msg="StopPodSandbox for \"f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18\" returns successfully"
Mar 20 21:38:19.323819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f19e6f38ef1ed753be0965e3236da63554bf10b5f1702cd4b3ad754aff90ac18-shm.mount: Deactivated successfully.
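exit_status:137 on the sandbox task above is the conventional 128+signal encoding: 137 - 128 = 9, i.e. SIGKILL, which is how a sandbox's pause process normally dies when the sandbox is stopped, rather than a crash. The decoding in Python:

    import signal

    status = 137                 # exit_status from the TaskExit event above
    sig = status - 128           # 128 + signal-number convention
    print(sig, signal.Signals(sig).name)   # 9 SIGKILL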
Mar 20 21:38:19.361061 kubelet[2567]: I0320 21:38:19.361024 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56pbw\" (UniqueName: \"kubernetes.io/projected/872e6251-6e97-43bb-8f50-79e0f03579b8-kube-api-access-56pbw\") pod \"872e6251-6e97-43bb-8f50-79e0f03579b8\" (UID: \"872e6251-6e97-43bb-8f50-79e0f03579b8\") "
Mar 20 21:38:19.361203 kubelet[2567]: I0320 21:38:19.361068 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872e6251-6e97-43bb-8f50-79e0f03579b8-tigera-ca-bundle\") pod \"872e6251-6e97-43bb-8f50-79e0f03579b8\" (UID: \"872e6251-6e97-43bb-8f50-79e0f03579b8\") "
Mar 20 21:38:19.361203 kubelet[2567]: I0320 21:38:19.361100 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/872e6251-6e97-43bb-8f50-79e0f03579b8-typha-certs\") pod \"872e6251-6e97-43bb-8f50-79e0f03579b8\" (UID: \"872e6251-6e97-43bb-8f50-79e0f03579b8\") "
Mar 20 21:38:19.363792 kubelet[2567]: I0320 21:38:19.363753 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872e6251-6e97-43bb-8f50-79e0f03579b8-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "872e6251-6e97-43bb-8f50-79e0f03579b8" (UID: "872e6251-6e97-43bb-8f50-79e0f03579b8"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 21:38:19.364025 kubelet[2567]: I0320 21:38:19.363989 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872e6251-6e97-43bb-8f50-79e0f03579b8-kube-api-access-56pbw" (OuterVolumeSpecName: "kube-api-access-56pbw") pod "872e6251-6e97-43bb-8f50-79e0f03579b8" (UID: "872e6251-6e97-43bb-8f50-79e0f03579b8"). InnerVolumeSpecName "kube-api-access-56pbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 21:38:19.364739 systemd[1]: var-lib-kubelet-pods-872e6251\x2d6e97\x2d43bb\x2d8f50\x2d79e0f03579b8-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Mar 20 21:38:19.365558 kubelet[2567]: I0320 21:38:19.365515 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872e6251-6e97-43bb-8f50-79e0f03579b8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "872e6251-6e97-43bb-8f50-79e0f03579b8" (UID: "872e6251-6e97-43bb-8f50-79e0f03579b8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 21:38:19.366999 systemd[1]: var-lib-kubelet-pods-872e6251\x2d6e97\x2d43bb\x2d8f50\x2d79e0f03579b8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Mar 20 21:38:19.367091 systemd[1]: var-lib-kubelet-pods-872e6251\x2d6e97\x2d43bb\x2d8f50\x2d79e0f03579b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56pbw.mount: Deactivated successfully.
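The kube-api-access-56pbw volume being torn down above is a projected service-account token volume (note the kubernetes.io\x7eprojected, i.e. kubernetes.io~projected, segment in the mount-unit name); such volumes hold token, ca.crt and namespace files, and the token is a JWT. A sketch for peeking at a token's payload; the path below is hypothetical, reconstructed from the unit name, and this particular pod is already gone in the log:

    import base64, json

    path = ("/var/lib/kubelet/pods/872e6251-6e97-43bb-8f50-79e0f03579b8"
            "/volumes/kubernetes.io~projected/kube-api-access-56pbw/token")

    with open(path) as f:
        token = f.read().strip()

    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)          # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    print(payload.get("sub"), payload.get("exp"))         # subject and expiry claims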
Mar 20 21:38:19.461938 kubelet[2567]: I0320 21:38:19.461898 2567 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-56pbw\" (UniqueName: \"kubernetes.io/projected/872e6251-6e97-43bb-8f50-79e0f03579b8-kube-api-access-56pbw\") on node \"localhost\" DevicePath \"\""
Mar 20 21:38:19.462191 kubelet[2567]: I0320 21:38:19.462135 2567 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872e6251-6e97-43bb-8f50-79e0f03579b8-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Mar 20 21:38:19.462191 kubelet[2567]: I0320 21:38:19.462150 2567 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/872e6251-6e97-43bb-8f50-79e0f03579b8-typha-certs\") on node \"localhost\" DevicePath \"\""
Mar 20 21:38:19.680870 kubelet[2567]: I0320 21:38:19.680791 2567 scope.go:117] "RemoveContainer" containerID="a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197"
Mar 20 21:38:19.685709 containerd[1452]: time="2025-03-20T21:38:19.685458022Z" level=info msg="RemoveContainer for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\""
Mar 20 21:38:19.689088 containerd[1452]: time="2025-03-20T21:38:19.689059953Z" level=info msg="RemoveContainer for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" returns successfully"
Mar 20 21:38:19.689555 kubelet[2567]: I0320 21:38:19.689229 2567 scope.go:117] "RemoveContainer" containerID="a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197"
Mar 20 21:38:19.689784 containerd[1452]: time="2025-03-20T21:38:19.689433529Z" level=error msg="ContainerStatus for \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\": not found"
Mar 20 21:38:19.689818 kubelet[2567]: E0320 21:38:19.689550 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\": not found" containerID="a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197"
Mar 20 21:38:19.689818 kubelet[2567]: I0320 21:38:19.689599 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197"} err="failed to get container status \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\": rpc error: code = NotFound desc = an error occurred when try to find container \"a298a9451a4c08cfae0fae7fc04acfe264caa10b80a591347cfad1ee96df4197\": not found"
Mar 20 21:38:19.691662 systemd[1]: Removed slice kubepods-besteffort-pod872e6251_6e97_43bb_8f50_79e0f03579b8.slice - libcontainer container kubepods-besteffort-pod872e6251_6e97_43bb_8f50_79e0f03579b8.slice.
Mar 20 21:38:19.691772 systemd[1]: kubepods-besteffort-pod872e6251_6e97_43bb_8f50_79e0f03579b8.slice: Consumed 395ms CPU time, 27.1M memory peak, 5.7M read from disk.
Mar 20 21:38:20.406342 kubelet[2567]: I0320 21:38:20.406299 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872e6251-6e97-43bb-8f50-79e0f03579b8" path="/var/lib/kubelet/pods/872e6251-6e97-43bb-8f50-79e0f03579b8/volumes"
Mar 20 21:38:22.480542 systemd[1]: Started sshd@21-10.0.0.3:22-10.0.0.1:51458.service - OpenSSH per-connection server daemon (10.0.0.1:51458).
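The sshd@21-10.0.0.3:22-10.0.0.1:51458.service unit above comes from systemd socket activation in per-connection mode (Accept=yes): each TCP connection gets its own service instance whose name encodes a connection counter plus the local and peer address:port pairs. Those fields can be recovered from the unit name itself, e.g.:

    import re

    unit = "sshd@21-10.0.0.3:22-10.0.0.1:51458.service"
    m = re.match(r"sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$", unit)
    conn, local, lport, peer, pport = m.groups()
    print(f"connection #{conn}: {peer}:{pport} -> {local}:{lport}")
    # connection #21: 10.0.0.1:51458 -> 10.0.0.3:22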
Mar 20 21:38:22.537997 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 51458 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8
Mar 20 21:38:22.539409 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:38:22.543512 systemd-logind[1436]: New session 22 of user core.
Mar 20 21:38:22.559740 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 20 21:38:22.746451 sshd[5731]: Connection closed by 10.0.0.1 port 51458
Mar 20 21:38:22.747146 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Mar 20 21:38:22.750714 systemd[1]: sshd@21-10.0.0.3:22-10.0.0.1:51458.service: Deactivated successfully.
Mar 20 21:38:22.752461 systemd[1]: session-22.scope: Deactivated successfully.
Mar 20 21:38:22.753143 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit.
Mar 20 21:38:22.753956 systemd-logind[1436]: Removed session 22.
Mar 20 21:38:25.029162 kubelet[2567]: I0320 21:38:25.028894 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 21:38:27.757983 systemd[1]: Started sshd@22-10.0.0.3:22-10.0.0.1:51468.service - OpenSSH per-connection server daemon (10.0.0.1:51468).
Mar 20 21:38:27.814753 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 51468 ssh2: RSA SHA256:RPxckmxBxmDHHfBFzj0E8HhfLPWbeWZYhF2T7Zu87Y8
Mar 20 21:38:27.815837 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 21:38:27.819830 systemd-logind[1436]: New session 23 of user core.
Mar 20 21:38:27.829851 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 20 21:38:27.947990 sshd[5757]: Connection closed by 10.0.0.1 port 51468
Mar 20 21:38:27.948505 sshd-session[5755]: pam_unix(sshd:session): session closed for user core
Mar 20 21:38:27.951186 systemd[1]: sshd@22-10.0.0.3:22-10.0.0.1:51468.service: Deactivated successfully.
Mar 20 21:38:27.952906 systemd[1]: session-23.scope: Deactivated successfully.
Mar 20 21:38:27.954119 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit.
Mar 20 21:38:27.954876 systemd-logind[1436]: Removed session 23.
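Every non-kernel line in this journal shares the shape "Mar 20 21:38:27.954876 unit[pid]: message". For offline analysis of a capture like this one, a minimal stdlib parser over that shape (the year is an assumption, since the syslog-style stamp omits it):

    import re
    from datetime import datetime

    LINE = re.compile(r"^(\w{3} +\d+ \d\d:\d\d:\d\d\.\d+) (\S+?)\[(\d+)\]: (.*)$")

    def parse(line, year=2025):
        m = LINE.match(line)
        if m is None:
            return None          # continuation lines won't match
        stamp, unit, pid, msg = m.groups()
        when = datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")
        return when, unit, int(pid), msg

    print(parse("Mar 20 21:38:27.954876 systemd-logind[1436]: Removed session 23."))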