Jan 30 13:02:39.032635 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:02:39.032661 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:02:39.032672 kernel: KASLR enabled
Jan 30 13:02:39.032678 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:02:39.032683 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 30 13:02:39.032689 kernel: random: crng init done
Jan 30 13:02:39.032696 kernel: secureboot: Secure boot disabled
Jan 30 13:02:39.032702 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:02:39.032708 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 30 13:02:39.032716 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:02:39.032722 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032736 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032743 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032749 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032756 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032765 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032771 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032778 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032784 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:39.032790 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 13:02:39.032796 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:02:39.032802 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:02:39.032808 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 13:02:39.032815 kernel: Zone ranges:
Jan 30 13:02:39.032821 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:02:39.032829 kernel: DMA32 empty
Jan 30 13:02:39.032835 kernel: Normal empty
Jan 30 13:02:39.032841 kernel: Movable zone start for each node
Jan 30 13:02:39.032847 kernel: Early memory node ranges
Jan 30 13:02:39.032853 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 30 13:02:39.032859 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 30 13:02:39.032865 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 30 13:02:39.032871 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 13:02:39.032877 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 13:02:39.032883 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 13:02:39.032889 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 13:02:39.032895 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 13:02:39.032903 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 13:02:39.032909 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:02:39.032916 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 13:02:39.032925 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:02:39.032932 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:02:39.032938 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:02:39.032947 kernel: psci: Trusted OS migration not required
Jan 30 13:02:39.032953 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:02:39.032960 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:02:39.032967 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:02:39.032973 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:02:39.032980 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 13:02:39.032987 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:02:39.032993 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:02:39.033000 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:02:39.033006 kernel: CPU features: detected: Spectre-v4
Jan 30 13:02:39.033015 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:02:39.033021 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:02:39.033028 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:02:39.033034 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:02:39.033041 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:02:39.033047 kernel: alternatives: applying boot alternatives
Jan 30 13:02:39.033055 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:02:39.033062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:02:39.033069 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:02:39.033075 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:02:39.033082 kernel: Fallback order for Node 0: 0
Jan 30 13:02:39.033090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 13:02:39.033097 kernel: Policy zone: DMA
Jan 30 13:02:39.033103 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:02:39.033110 kernel: software IO TLB: area num 4.
Jan 30 13:02:39.033116 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 13:02:39.033123 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 30 13:02:39.033130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:02:39.033137 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:02:39.033144 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:02:39.033151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:02:39.033158 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:02:39.033164 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:02:39.033173 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:02:39.033180 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:02:39.033186 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:02:39.033193 kernel: GICv3: 256 SPIs implemented
Jan 30 13:02:39.033199 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:02:39.033206 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:02:39.033212 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:02:39.033219 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:02:39.033226 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:02:39.033233 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:02:39.033239 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:02:39.033248 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 13:02:39.033255 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 13:02:39.033261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:02:39.033268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:39.033275 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:02:39.033282 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:02:39.033288 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:02:39.033295 kernel: arm-pv: using stolen time PV
Jan 30 13:02:39.033302 kernel: Console: colour dummy device 80x25
Jan 30 13:02:39.033309 kernel: ACPI: Core revision 20230628
Jan 30 13:02:39.033316 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:02:39.033324 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:02:39.033331 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:02:39.033338 kernel: landlock: Up and running.
Jan 30 13:02:39.033345 kernel: SELinux: Initializing.
Jan 30 13:02:39.033351 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:02:39.033358 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:02:39.033365 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:02:39.033372 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:02:39.033379 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:02:39.033387 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:02:39.033394 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:02:39.033401 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:02:39.033408 kernel: Remapping and enabling EFI services.
Jan 30 13:02:39.033415 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:02:39.033421 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:02:39.033428 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:02:39.033435 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 13:02:39.033442 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:39.033450 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:02:39.033458 kernel: Detected PIPT I-cache on CPU2
Jan 30 13:02:39.033470 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 13:02:39.033479 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 13:02:39.033486 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:39.033493 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 13:02:39.033500 kernel: Detected PIPT I-cache on CPU3
Jan 30 13:02:39.033507 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 13:02:39.033514 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 13:02:39.033523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:39.033530 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 13:02:39.033537 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:02:39.033544 kernel: SMP: Total of 4 processors activated.
Jan 30 13:02:39.033552 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:02:39.033559 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:02:39.033566 kernel: CPU features: detected: Common not Private translations
Jan 30 13:02:39.033573 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:02:39.033581 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:02:39.033671 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:02:39.033679 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:02:39.033686 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:02:39.033694 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:02:39.033701 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:02:39.033708 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:02:39.033715 kernel: alternatives: applying system-wide alternatives
Jan 30 13:02:39.033722 kernel: devtmpfs: initialized
Jan 30 13:02:39.033740 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:02:39.033747 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:02:39.033754 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:02:39.033761 kernel: SMBIOS 3.0.0 present.
Jan 30 13:02:39.033769 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 30 13:02:39.033776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:02:39.033783 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:02:39.033791 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:02:39.033806 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:02:39.033816 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:02:39.033823 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jan 30 13:02:39.033830 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:02:39.033838 kernel: cpuidle: using governor menu
Jan 30 13:02:39.033845 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:02:39.033852 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:02:39.033859 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:02:39.033866 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:02:39.033874 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:02:39.033882 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:02:39.033889 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:02:39.033897 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:02:39.033904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:02:39.033911 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:02:39.033918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:02:39.033925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:02:39.033933 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:02:39.033940 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:02:39.033949 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:02:39.033956 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:02:39.033963 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:02:39.033970 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:02:39.033977 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:02:39.033984 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:02:39.033991 kernel: ACPI: Interpreter enabled
Jan 30 13:02:39.033998 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:02:39.034005 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:02:39.034013 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:02:39.034021 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:02:39.034029 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:02:39.034177 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:02:39.034254 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:02:39.034324 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:02:39.034392 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:02:39.034459 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:02:39.034471 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:02:39.034478 kernel: PCI host bridge to bus 0000:00
Jan 30 13:02:39.034552 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:02:39.034645 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:02:39.034708 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:02:39.034875 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:02:39.034969 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:02:39.035056 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:02:39.035126 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:02:39.035196 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:02:39.035265 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:02:39.035333 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:02:39.035403 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:02:39.035474 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:02:39.035536 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:02:39.035622 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:02:39.035693 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:02:39.035702 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:02:39.035710 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:02:39.035717 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:02:39.035731 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:02:39.035745 kernel: iommu: Default domain type: Translated
Jan 30 13:02:39.035753 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:02:39.035760 kernel: efivars: Registered efivars operations
Jan 30 13:02:39.035767 kernel: vgaarb: loaded
Jan 30 13:02:39.035775 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:02:39.035782 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:02:39.035790 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:02:39.035797 kernel: pnp: PnP ACPI init
Jan 30 13:02:39.035884 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:02:39.035898 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:02:39.035905 kernel: NET: Registered PF_INET protocol family
Jan 30 13:02:39.035913 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:02:39.035920 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:02:39.035927 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:02:39.035935 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:02:39.035942 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:02:39.035949 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:02:39.035958 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:02:39.035966 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:02:39.035973 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:02:39.035980 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:02:39.035987 kernel: kvm [1]: HYP mode not available
Jan 30 13:02:39.035995 kernel: Initialise system trusted keyrings
Jan 30 13:02:39.036002 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:02:39.036009 kernel: Key type asymmetric registered
Jan 30 13:02:39.036016 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:02:39.036025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:02:39.036032 kernel: io scheduler mq-deadline registered
Jan 30 13:02:39.036039 kernel: io scheduler kyber registered
Jan 30 13:02:39.036047 kernel: io scheduler bfq registered
Jan 30 13:02:39.036054 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:02:39.036062 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:02:39.036069 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:02:39.036140 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:02:39.036151 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:02:39.036160 kernel: thunder_xcv, ver 1.0
Jan 30 13:02:39.036167 kernel: thunder_bgx, ver 1.0
Jan 30 13:02:39.036175 kernel: nicpf, ver 1.0
Jan 30 13:02:39.036182 kernel: nicvf, ver 1.0
Jan 30 13:02:39.036264 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:02:39.036330 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:02:38 UTC (1738242158)
Jan 30 13:02:39.036339 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:02:39.036347 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:02:39.036357 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:02:39.036364 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:02:39.036371 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:02:39.036378 kernel: Segment Routing with IPv6
Jan 30 13:02:39.036386 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:02:39.036393 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:02:39.036400 kernel: Key type dns_resolver registered
Jan 30 13:02:39.036407 kernel: registered taskstats version 1
Jan 30 13:02:39.036414 kernel: Loading compiled-in X.509 certificates
Jan 30 13:02:39.036421 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:02:39.036430 kernel: Key type .fscrypt registered
Jan 30 13:02:39.036437 kernel: Key type fscrypt-provisioning registered
Jan 30 13:02:39.036444 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:02:39.036452 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:02:39.036459 kernel: ima: No architecture policies found
Jan 30 13:02:39.036466 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:02:39.036473 kernel: clk: Disabling unused clocks
Jan 30 13:02:39.036480 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:02:39.036489 kernel: Run /init as init process
Jan 30 13:02:39.036496 kernel: with arguments:
Jan 30 13:02:39.036503 kernel: /init
Jan 30 13:02:39.036510 kernel: with environment:
Jan 30 13:02:39.036517 kernel: HOME=/
Jan 30 13:02:39.036524 kernel: TERM=linux
Jan 30 13:02:39.036531 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:02:39.036540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:02:39.036551 systemd[1]: Detected virtualization kvm.
Jan 30 13:02:39.036559 systemd[1]: Detected architecture arm64.
Jan 30 13:02:39.036567 systemd[1]: Running in initrd.
Jan 30 13:02:39.036575 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:02:39.036582 systemd[1]: Hostname set to <localhost>.
Jan 30 13:02:39.036619 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:02:39.036628 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:02:39.036636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:02:39.036647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:02:39.036655 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:02:39.036663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:02:39.036671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:02:39.036679 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:02:39.036689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:02:39.036697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:02:39.036706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:02:39.036714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:02:39.036722 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:02:39.036735 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:02:39.036743 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:02:39.036751 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:02:39.036759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:02:39.036767 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:02:39.036776 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:02:39.036784 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:02:39.036792 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:02:39.036800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:02:39.036807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:02:39.036815 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:02:39.036823 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:02:39.036831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:02:39.036838 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:02:39.036849 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:02:39.036857 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:02:39.036864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:02:39.036872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:39.036880 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:02:39.036888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:02:39.036895 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:02:39.036905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:02:39.036914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:39.036921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:02:39.036951 systemd-journald[238]: Collecting audit messages is disabled.
Jan 30 13:02:39.036973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:02:39.036981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:02:39.036989 systemd-journald[238]: Journal started
Jan 30 13:02:39.037012 systemd-journald[238]: Runtime Journal (/run/log/journal/fdca3cdf64ad4dea9edddef12fe97cef) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:02:39.029736 systemd-modules-load[239]: Inserted module 'overlay'
Jan 30 13:02:39.041955 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:02:39.045766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:02:39.049838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:02:39.050983 kernel: Bridge firewalling registered
Jan 30 13:02:39.050609 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 30 13:02:39.053358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:02:39.056671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:39.058195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:02:39.060543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:02:39.073777 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:02:39.075579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:02:39.088359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:02:39.090960 dracut-cmdline[272]: dracut-dracut-053
Jan 30 13:02:39.097213 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:02:39.099035 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:02:39.139119 systemd-resolved[287]: Positive Trust Anchors:
Jan 30 13:02:39.139138 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:02:39.139170 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:02:39.154933 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 30 13:02:39.156788 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:02:39.158079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:02:39.250622 kernel: SCSI subsystem initialized
Jan 30 13:02:39.256675 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:02:39.267362 kernel: iscsi: registered transport (tcp)
Jan 30 13:02:39.285702 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:02:39.285773 kernel: QLogic iSCSI HBA Driver
Jan 30 13:02:39.350793 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:02:39.362786 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:02:39.381102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:02:39.381184 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:02:39.382390 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:02:39.431663 kernel: raid6: neonx8 gen() 15631 MB/s
Jan 30 13:02:39.448866 kernel: raid6: neonx4 gen() 15678 MB/s
Jan 30 13:02:39.465646 kernel: raid6: neonx2 gen() 13174 MB/s
Jan 30 13:02:39.482645 kernel: raid6: neonx1 gen() 10473 MB/s
Jan 30 13:02:39.499643 kernel: raid6: int64x8 gen() 6779 MB/s
Jan 30 13:02:39.516646 kernel: raid6: int64x4 gen() 7268 MB/s
Jan 30 13:02:39.533633 kernel: raid6: int64x2 gen() 6060 MB/s
Jan 30 13:02:39.550932 kernel: raid6: int64x1 gen() 5020 MB/s
Jan 30 13:02:39.551001 kernel: raid6: using algorithm neonx4 gen() 15678 MB/s
Jan 30 13:02:39.568914 kernel: raid6: .... xor() 12303 MB/s, rmw enabled
Jan 30 13:02:39.568990 kernel: raid6: using neon recovery algorithm
Jan 30 13:02:39.574609 kernel: xor: measuring software checksum speed
Jan 30 13:02:39.574645 kernel: 8regs : 21573 MB/sec
Jan 30 13:02:39.576011 kernel: 32regs : 19021 MB/sec
Jan 30 13:02:39.576028 kernel: arm64_neon : 27561 MB/sec
Jan 30 13:02:39.576037 kernel: xor: using function: arm64_neon (27561 MB/sec)
Jan 30 13:02:39.630632 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:02:39.645147 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:02:39.657860 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:02:39.671366 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 30 13:02:39.674578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:02:39.680775 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:02:39.697956 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 30 13:02:39.732677 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:02:39.747813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:02:39.801605 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:02:39.812196 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:02:39.835257 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:02:39.838524 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:02:39.841415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:02:39.842971 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:02:39.852951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:02:39.862968 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:02:39.881377 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:02:39.881493 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:02:39.881507 kernel: GPT:9289727 != 19775487
Jan 30 13:02:39.881516 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:02:39.881525 kernel: GPT:9289727 != 19775487
Jan 30 13:02:39.881534 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:02:39.881545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:02:39.870062 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:02:39.874570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:02:39.874684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:39.880998 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:02:39.882495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:02:39.882662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:39.885180 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:39.892017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:39.903621 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (506)
Jan 30 13:02:39.908752 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508)
Jan 30 13:02:39.910665 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:02:39.912238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:39.920559 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:02:39.927441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:02:39.928847 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:02:39.935033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:02:39.947807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:02:39.950065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:02:39.970807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:40.073705 disk-uuid[552]: Primary Header is updated.
Jan 30 13:02:40.073705 disk-uuid[552]: Secondary Entries is updated.
Jan 30 13:02:40.073705 disk-uuid[552]: Secondary Header is updated.
Jan 30 13:02:40.078627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:02:41.089639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:02:41.090936 disk-uuid[561]: The operation has completed successfully.
Jan 30 13:02:41.134366 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:02:41.134471 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:02:41.152831 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:02:41.158047 sh[573]: Success
Jan 30 13:02:41.185665 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:02:41.252249 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:02:41.254307 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:02:41.256154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:02:41.271040 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:02:41.271114 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:41.271125 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:02:41.273440 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:02:41.275259 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:02:41.279973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:02:41.281221 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:02:41.293788 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:02:41.295673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:02:41.314126 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:41.314196 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:41.314207 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:02:41.320956 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:02:41.331252 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:02:41.334396 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:41.340366 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:02:41.352806 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:02:41.435638 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:02:41.446113 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:02:41.475804 systemd-networkd[759]: lo: Link UP
Jan 30 13:02:41.475814 systemd-networkd[759]: lo: Gained carrier
Jan 30 13:02:41.476768 systemd-networkd[759]: Enumeration completed
Jan 30 13:02:41.476894 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:02:41.477762 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:41.477766 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:02:41.478642 systemd-networkd[759]: eth0: Link UP
Jan 30 13:02:41.478645 systemd-networkd[759]: eth0: Gained carrier
Jan 30 13:02:41.478652 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:41.481806 systemd[1]: Reached target network.target - Network.
Jan 30 13:02:41.491143 ignition[665]: Ignition 2.20.0
Jan 30 13:02:41.491150 ignition[665]: Stage: fetch-offline
Jan 30 13:02:41.491198 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:41.491206 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:41.491376 ignition[665]: parsed url from cmdline: ""
Jan 30 13:02:41.491380 ignition[665]: no config URL provided
Jan 30 13:02:41.491385 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:02:41.491392 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:02:41.491421 ignition[665]: op(1): [started] loading QEMU firmware config module
Jan 30 13:02:41.491425 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:02:41.505673 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:02:41.499811 ignition[665]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:02:41.547765 ignition[665]: parsing config with SHA512: 49434d2f65b334ce81b6fde894a34686518c322efb4ee0c7e314a60ea032a88afea7760356cd5c688a87b0e391d12bb69aff60061cd900c25356db3c2a472605
Jan 30 13:02:41.554656 unknown[665]: fetched base config from "system"
Jan 30 13:02:41.554670 unknown[665]: fetched user config from "qemu"
Jan 30 13:02:41.555124 ignition[665]: fetch-offline: fetch-offline passed
Jan 30 13:02:41.555217 ignition[665]: Ignition finished successfully
Jan 30 13:02:41.559689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:02:41.561717 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:02:41.569781 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:02:41.581336 ignition[771]: Ignition 2.20.0
Jan 30 13:02:41.581347 ignition[771]: Stage: kargs
Jan 30 13:02:41.581534 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:41.581544 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:41.585946 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:02:41.582543 ignition[771]: kargs: kargs passed
Jan 30 13:02:41.582616 ignition[771]: Ignition finished successfully
Jan 30 13:02:41.603803 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:02:41.614203 ignition[779]: Ignition 2.20.0
Jan 30 13:02:41.614214 ignition[779]: Stage: disks
Jan 30 13:02:41.614391 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:41.614402 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:41.617714 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:02:41.615359 ignition[779]: disks: disks passed
Jan 30 13:02:41.619368 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:02:41.615412 ignition[779]: Ignition finished successfully
Jan 30 13:02:41.621392 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:02:41.624032 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:02:41.625964 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:02:41.628041 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:02:41.639861 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:02:41.658160 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:02:41.668770 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:02:41.678789 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:02:41.734651 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:02:41.735144 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:02:41.736670 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:02:41.752771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:02:41.755209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:02:41.756808 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:02:41.756863 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:02:41.765602 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 30 13:02:41.765878 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:41.756904 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:02:41.770971 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:41.771007 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:02:41.771018 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:02:41.765388 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:02:41.771081 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:02:41.774757 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:02:41.823309 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:02:41.827782 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:02:41.832073 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:02:41.836388 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:02:41.935866 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:02:41.947741 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:02:41.950618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:02:41.956650 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:41.978200 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:02:41.980563 ignition[913]: INFO : Ignition 2.20.0
Jan 30 13:02:41.980563 ignition[913]: INFO : Stage: mount
Jan 30 13:02:41.980563 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:41.980563 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:41.980563 ignition[913]: INFO : mount: mount passed
Jan 30 13:02:41.980563 ignition[913]: INFO : Ignition finished successfully
Jan 30 13:02:41.981610 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:02:42.000818 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:02:42.268678 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:02:42.283863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:02:42.290636 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Jan 30 13:02:42.293233 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:42.293260 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:42.293271 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:02:42.296613 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:02:42.297665 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:02:42.321199 ignition[944]: INFO : Ignition 2.20.0
Jan 30 13:02:42.321199 ignition[944]: INFO : Stage: files
Jan 30 13:02:42.322890 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:42.322890 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:42.322890 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:02:42.329957 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:02:42.329957 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:02:42.342061 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:02:42.343734 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:02:42.343734 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:02:42.342760 unknown[944]: wrote ssh authorized keys file for user: core
Jan 30 13:02:42.348160 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 30 13:02:42.348160 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 30 13:02:42.411643 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:02:42.899150 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 30 13:02:42.899150 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:02:42.903363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 30 13:02:43.213712 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:02:43.266619 systemd-networkd[759]: eth0: Gained IPv6LL
Jan 30 13:02:43.465197 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:02:43.465197 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:02:43.469893 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:02:43.469893 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:02:43.469893 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:02:43.469893 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 13:02:43.469893 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:02:43.479520 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:02:43.479520 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 13:02:43.479520 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:02:43.507611 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:02:43.511639 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:02:43.514635 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:02:43.514635 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:02:43.514635 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:02:43.514635 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:02:43.514635 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:02:43.514635 ignition[944]: INFO : files: files passed
Jan 30 13:02:43.514635 ignition[944]: INFO : Ignition finished successfully
Jan 30 13:02:43.515526 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:02:43.531851 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:02:43.533960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:02:43.540038 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:02:43.540222 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:02:43.544725 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:02:43.547137 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:02:43.547137 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:02:43.551022 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:02:43.552058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:02:43.555004 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:02:43.561481 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:02:43.586647 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:02:43.586800 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:02:43.589195 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:02:43.592965 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:02:43.595019 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:02:43.609847 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:02:43.627018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:02:43.646845 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:02:43.658064 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:02:43.659407 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:02:43.661675 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:02:43.663491 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:02:43.663667 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:02:43.666278 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:02:43.668270 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:02:43.669980 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:02:43.671756 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:02:43.674342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:02:43.676278 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:02:43.678199 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:02:43.680220 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:02:43.682154 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:02:43.683988 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:02:43.685529 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:02:43.685690 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:02:43.687973 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:02:43.690793 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:02:43.692005 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:02:43.692722 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:02:43.694149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:02:43.694278 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:02:43.697214 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:02:43.697340 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:02:43.699543 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:02:43.701145 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:02:43.701842 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:02:43.703218 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:02:43.705361 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:02:43.707057 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:02:43.707203 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:02:43.709802 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:02:43.709934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:02:43.712238 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:02:43.712401 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:02:43.714982 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:02:43.715131 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:02:43.727886 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:02:43.731321 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:02:43.732501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:02:43.732725 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:02:43.734860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:02:43.735016 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:02:43.741893 ignition[999]: INFO : Ignition 2.20.0
Jan 30 13:02:43.741893 ignition[999]: INFO : Stage: umount
Jan 30 13:02:43.743965 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:43.743965 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:43.743965 ignition[999]: INFO : umount: umount passed
Jan 30 13:02:43.743965 ignition[999]: INFO : Ignition finished successfully
Jan 30 13:02:43.744835 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:02:43.746149 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:02:43.747212 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:02:43.749196 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:02:43.749289 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:02:43.752915 systemd[1]: Stopped target network.target - Network.
Jan 30 13:02:43.754750 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:02:43.754842 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:02:43.757055 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:02:43.757109 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:02:43.759361 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:02:43.759412 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:02:43.761767 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:02:43.761823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:02:43.765363 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:02:43.767344 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:02:43.777662 systemd-networkd[759]: eth0: DHCPv6 lease lost
Jan 30 13:02:43.781138 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:02:43.782235 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:02:43.784754 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:02:43.784886 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:02:43.788892 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:02:43.788964 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:02:43.804770 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:02:43.805764 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:02:43.805835 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:02:43.808064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:02:43.808114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:02:43.809251 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:02:43.809303 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:02:43.811406 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:02:43.811457 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:02:43.813748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:02:43.816092 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:02:43.818098 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:02:43.830800 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:02:43.830882 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:02:43.835619 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:02:43.836705 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:02:43.838112 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:02:43.838244 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:02:43.840757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:02:43.840833 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:02:43.842143 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:02:43.842195 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:02:43.844561 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:02:43.844643 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:02:43.847847 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:02:43.847905 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:02:43.851015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:02:43.851076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:43.865826 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:02:43.866967 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:02:43.867054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:02:43.869449 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:02:43.869503 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:02:43.871642 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:02:43.871694 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:02:43.874032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:02:43.874086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:43.877210 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:02:43.877301 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:02:43.881147 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:02:43.884938 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:02:43.896633 systemd[1]: Switching root.
Jan 30 13:02:43.925386 systemd-journald[238]: Journal stopped
Jan 30 13:02:44.884303 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:02:44.884367 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:02:44.884380 kernel: SELinux: policy capability open_perms=1
Jan 30 13:02:44.884395 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:02:44.884405 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:02:44.884415 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:02:44.884425 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:02:44.884434 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:02:44.884448 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:02:44.884460 kernel: audit: type=1403 audit(1738242164.094:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:02:44.884472 systemd[1]: Successfully loaded SELinux policy in 39.276ms.
Jan 30 13:02:44.884486 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.363ms.
Jan 30 13:02:44.884498 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:02:44.884509 systemd[1]: Detected virtualization kvm.
Jan 30 13:02:44.884521 systemd[1]: Detected architecture arm64.
Jan 30 13:02:44.884531 systemd[1]: Detected first boot.
Jan 30 13:02:44.884543 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:02:44.884554 zram_generator::config[1044]: No configuration found.
Jan 30 13:02:44.884568 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:02:44.884579 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:02:44.884618 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:02:44.884634 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:02:44.884651 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:02:44.884662 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:02:44.884673 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:02:44.884684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:02:44.884697 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:02:44.884708 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:02:44.884720 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:02:44.884737 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:02:44.884755 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:02:44.884766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:02:44.884778 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:02:44.884789 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:02:44.884799 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:02:44.884812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:02:44.884823 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 13:02:44.884834 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:02:44.884845 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:02:44.884855 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:02:44.884866 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:02:44.884876 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:02:44.884889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:02:44.884900 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:02:44.884915 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:02:44.884930 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:02:44.884941 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:02:44.884952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:02:44.884963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:02:44.884973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
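"Populated /etc with preset unit settings" is systemd applying preset files on first boot, the same enable/disable mechanism Ignition recorded above for prepare-helm.service and coreos-metadata.service. A preset file is a plain list of directives; here is a minimal sketch that writes a hypothetical fragment (the file name and target path are illustrative; the directive syntax follows systemd.preset):

    # systemd reads *.preset files from /etc/systemd/system-preset/ and
    # /usr/lib/systemd/system-preset/; files are processed in lexical
    # order and the first matching directive for a unit wins.
    PRESET = "enable prepare-helm.service\ndisable coreos-metadata.service\n"
    with open("/tmp/20-example.preset", "w") as f:  # illustrative path only
        f.write(PRESET)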
Jan 30 13:02:44.884984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:02:44.884996 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:02:44.885009 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:02:44.885020 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:02:44.885031 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:02:44.885042 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:02:44.885052 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:02:44.885063 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:02:44.885073 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:02:44.885084 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:02:44.885097 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:02:44.885108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:44.885119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:02:44.885130 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:02:44.885141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:44.885153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:02:44.885164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:44.885175 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:02:44.885186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:44.885199 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:02:44.885210 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:02:44.885220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:02:44.885231 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:02:44.885242 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:02:44.885253 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:02:44.885265 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:02:44.885275 kernel: fuse: init (API version 7.39)
Jan 30 13:02:44.885287 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:02:44.885305 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:02:44.885319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:02:44.885333 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:02:44.885347 systemd[1]: Stopped verity-setup.service.
Jan 30 13:02:44.885361 kernel: loop: module loaded
Jan 30 13:02:44.885374 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:02:44.885388 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:02:44.885403 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:02:44.885419 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:02:44.885433 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:02:44.885448 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:02:44.885459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:02:44.885470 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:02:44.885482 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:02:44.885495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:44.885506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:44.885517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:44.885528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:44.885540 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:02:44.885551 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:02:44.885643 systemd-journald[1104]: Collecting audit messages is disabled.
Jan 30 13:02:44.885676 kernel: ACPI: bus type drm_connector registered
Jan 30 13:02:44.885687 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:44.885698 systemd-journald[1104]: Journal started
Jan 30 13:02:44.885727 systemd-journald[1104]: Runtime Journal (/run/log/journal/fdca3cdf64ad4dea9edddef12fe97cef) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:02:44.555784 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:02:44.576627 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:02:44.577060 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:02:44.887161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:44.891438 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:02:44.892469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:02:44.892703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:02:44.894470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:02:44.896256 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:02:44.898212 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:02:44.905939 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:02:44.917468 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:02:44.933776 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:02:44.936415 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:02:44.937878 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:02:44.937935 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:02:44.941131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:02:44.944060 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
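With systemd-journald up, entries like the ones in this log become queryable programmatically. A small sketch, assuming the python-systemd bindings are installed, that replays what PID 1 logged during the current boot:

    from systemd import journal  # python-systemd package

    reader = journal.Reader()
    reader.this_boot()           # restrict to the current boot
    reader.add_match(_PID="1")   # only messages from systemd itself
    reader.seek_head()
    for entry in reader:
        # __REALTIME_TIMESTAMP is a datetime, MESSAGE is the log text.
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])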
Jan 30 13:02:44.947078 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:02:44.948534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:44.950508 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:02:44.953477 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:02:44.955009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:02:44.958937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:02:44.960474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:02:44.963841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:02:44.966868 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:02:44.973138 systemd-journald[1104]: Time spent on flushing to /var/log/journal/fdca3cdf64ad4dea9edddef12fe97cef is 15.150ms for 858 entries.
Jan 30 13:02:44.973138 systemd-journald[1104]: System Journal (/var/log/journal/fdca3cdf64ad4dea9edddef12fe97cef) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:02:45.000649 systemd-journald[1104]: Received client request to flush runtime journal.
Jan 30 13:02:45.000749 kernel: loop0: detected capacity change from 0 to 113552
Jan 30 13:02:44.974114 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:02:44.979395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:02:44.981225 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:02:44.982815 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:02:44.984705 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:02:44.989268 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:02:44.995378 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:02:45.007618 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:02:45.015872 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 30 13:02:45.015891 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 30 13:02:45.016819 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:02:45.020779 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:02:45.024861 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:02:45.027376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:02:45.031329 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:02:45.038700 kernel: loop1: detected capacity change from 0 to 201592
Jan 30 13:02:45.046678 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:02:45.048456 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:02:45.059052 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:02:45.060759 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:02:45.087979 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:02:45.092683 kernel: loop2: detected capacity change from 0 to 116784
Jan 30 13:02:45.100056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:02:45.118107 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 30 13:02:45.118512 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 30 13:02:45.123883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:02:45.130636 kernel: loop3: detected capacity change from 0 to 113552
Jan 30 13:02:45.137613 kernel: loop4: detected capacity change from 0 to 201592
Jan 30 13:02:45.146608 kernel: loop5: detected capacity change from 0 to 116784
Jan 30 13:02:45.152630 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:02:45.153106 (sd-merge)[1183]: Merged extensions into '/usr'.
Jan 30 13:02:45.156848 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:02:45.156864 systemd[1]: Reloading...
Jan 30 13:02:45.222626 zram_generator::config[1209]: No configuration found.
Jan 30 13:02:45.299141 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:02:45.323723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:02:45.361403 systemd[1]: Reloading finished in 204 ms.
Jan 30 13:02:45.390993 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:02:45.394634 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:02:45.414872 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:02:45.417561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:02:45.433208 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:02:45.433225 systemd[1]: Reloading...
Jan 30 13:02:45.441498 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:02:45.441750 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:02:45.442422 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:02:45.442650 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 30 13:02:45.442695 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 30 13:02:45.445339 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:02:45.445355 systemd-tmpfiles[1245]: Skipping /boot
Jan 30 13:02:45.454788 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:02:45.454807 systemd-tmpfiles[1245]: Skipping /boot
Jan 30 13:02:45.500640 zram_generator::config[1278]: No configuration found.
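The (sd-merge) lines above show systemd-sysext overlaying the three activated extension images onto /usr, which is why the reload that follows suddenly sees units such as docker.socket. As a rough sketch of the discovery step only, assuming the conventional search paths and extension-release layout documented for systemd-sysext (verify against the version actually running):

    import os

    # Assumed sysext search paths; images are *.raw files or directories.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def candidate_images():
        for base in SEARCH_PATHS:
            if not os.path.isdir(base):
                continue
            for name in sorted(os.listdir(base)):
                path = os.path.join(base, name)
                if name.endswith(".raw") or os.path.isdir(path):
                    yield path

    # Each image must carry usr/lib/extension-release.d/extension-release.<name>
    # whose ID= (plus SYSEXT_LEVEL= or VERSION_ID=) matches the host os-release
    # before systemd-sysext will merge it into the /usr overlay.
    for path in candidate_images():
        print("would consider:", path)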
Jan 30 13:02:45.579210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:02:45.616229 systemd[1]: Reloading finished in 182 ms.
Jan 30 13:02:45.635644 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:02:45.647138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:02:45.656254 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:02:45.659413 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:02:45.662519 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:02:45.666835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:02:45.677500 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:02:45.682933 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:02:45.700985 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:02:45.706664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:45.723356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:45.730782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:45.734110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:45.736687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:45.738405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:45.739689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:45.744554 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Jan 30 13:02:45.744752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:45.744969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:45.754232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:02:45.756790 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:45.756976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:45.765816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:45.775068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:45.779671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:45.785385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:45.788905 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:45.809158 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:02:45.811206 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:02:45.813219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:02:45.817378 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:02:45.821429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:02:45.825724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:45.825920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:45.831272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:45.831446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:45.834533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:45.835793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:45.837522 augenrules[1370]: No rules
Jan 30 13:02:45.844284 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:02:45.844507 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:02:45.844603 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1364)
Jan 30 13:02:45.868559 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:02:45.878704 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:02:45.881778 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 13:02:45.904868 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:02:45.906864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:45.910875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:45.913870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:02:45.919292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:45.920709 systemd-resolved[1311]: Positive Trust Anchors:
Jan 30 13:02:45.920741 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:02:45.920774 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:02:45.924002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:45.927853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:45.928221 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Jan 30 13:02:45.932677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:02:45.934508 augenrules[1388]: /sbin/augenrules: No change
Jan 30 13:02:45.937814 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:02:45.939756 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:02:45.940100 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:02:45.943191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:45.943375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:45.945065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:45.945214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:45.947162 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:02:45.947332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:02:45.949510 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:45.949676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:45.952209 augenrules[1412]: No rules
Jan 30 13:02:45.952548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:02:45.954408 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:02:45.954648 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:02:45.965240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:02:45.971582 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:02:45.973030 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:02:45.973110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:02:45.977181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:45.979082 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:02:45.995901 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:02:46.006998 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:02:46.017400 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:02:46.034851 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:02:46.036615 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:02:46.058487 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:02:46.062163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:02:46.063232 systemd-networkd[1402]: lo: Link UP
Jan 30 13:02:46.063237 systemd-networkd[1402]: lo: Gained carrier
Jan 30 13:02:46.064652 systemd-networkd[1402]: Enumeration completed
Jan 30 13:02:46.073223 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:46.073233 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:02:46.074449 systemd-networkd[1402]: eth0: Link UP
Jan 30 13:02:46.074457 systemd-networkd[1402]: eth0: Gained carrier
Jan 30 13:02:46.074475 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:46.075925 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:02:46.077259 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:02:46.079327 systemd[1]: Reached target network.target - Network.
Jan 30 13:02:46.081056 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:02:46.083005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:02:46.088678 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:02:46.089332 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
Jan 30 13:02:46.090611 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:02:46.090680 systemd-timesyncd[1406]: Initial clock synchronization to Thu 2025-01-30 13:02:45.980287 UTC.
Jan 30 13:02:46.121668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:46.123261 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:02:46.128142 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:02:46.130199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:02:46.131606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:02:46.133537 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:02:46.134859 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:02:46.136445 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:02:46.137833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:02:46.137923 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:02:46.138845 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:02:46.145317 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:02:46.148953 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:02:46.160931 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:02:46.166215 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:02:46.167705 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:02:46.168854 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:02:46.169915 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:02:46.169985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:02:46.187825 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:02:46.190374 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:02:46.192417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:02:46.194636 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:02:46.195760 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
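At this point eth0 holds a DHCPv4 lease and systemd-timesyncd has completed an initial sync against the gateway's NTP service. The same state can be confirmed interactively with the stock tools; a minimal sketch:

    import subprocess

    # Link and address state as tracked by systemd-networkd.
    subprocess.run(["networkctl", "status", "eth0"], check=True)
    # NTP synchronization details from systemd-timesyncd.
    subprocess.run(["timedatectl", "timesync-status"], check=True)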
Jan 30 13:02:46.202807 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:02:46.205880 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:02:46.216474 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:02:46.220019 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:02:46.234911 jq[1444]: false
Jan 30 13:02:46.244162 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:02:46.287169 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:02:46.287820 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:02:46.300092 dbus-daemon[1443]: [system] SELinux support is enabled
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found loop3
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found loop4
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found loop5
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda1
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda2
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda3
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found usr
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda4
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda6
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda7
Jan 30 13:02:46.321706 extend-filesystems[1445]: Found vda9
Jan 30 13:02:46.321706 extend-filesystems[1445]: Checking size of /dev/vda9
Jan 30 13:02:46.320823 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:02:46.324755 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:02:46.354644 jq[1462]: true
Jan 30 13:02:46.332362 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:02:46.342233 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:02:46.342416 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:02:46.342720 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:02:46.342866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:02:46.345253 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:02:46.345397 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:02:46.359056 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 30 13:02:46.362716 systemd-logind[1455]: New seat seat0.
Jan 30 13:02:46.366193 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:02:46.373986 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:02:46.374471 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:02:46.376083 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
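extend-filesystems has enumerated the disk and is checking /dev/vda9; the entries that follow show the root partition's ext4 filesystem being grown online to fill the disk. Done by hand, the flow is roughly the following sketch (assuming cloud-utils' growpart and e2fsprogs are available; the log does not say which tool Flatcar's unit actually invokes for the partition step):

    import subprocess

    # Grow partition 9 of /dev/vda into the free space after it.
    # growpart exits non-zero when there is nothing to do.
    subprocess.run(["growpart", "/dev/vda", "9"], check=False)
    # ext4 supports online growth, so the mounted root filesystem can be
    # resized in place -- matching "on-line resizing required" below.
    subprocess.run(["resize2fs", "/dev/vda9"], check=True)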
Jan 30 13:02:46.376233 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:02:46.389666 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1361)
Jan 30 13:02:46.393098 tar[1465]: linux-arm64/LICENSE
Jan 30 13:02:46.393098 tar[1465]: linux-arm64/helm
Jan 30 13:02:46.396495 extend-filesystems[1445]: Resized partition /dev/vda9
Jan 30 13:02:46.410193 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:02:46.412672 jq[1466]: true
Jan 30 13:02:46.434833 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:02:46.453614 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 30 13:02:46.462088 update_engine[1460]: I20250130 13:02:46.461871 1460 main.cc:92] Flatcar Update Engine starting
Jan 30 13:02:46.464741 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:02:46.464879 update_engine[1460]: I20250130 13:02:46.464746 1460 update_check_scheduler.cc:74] Next update check in 8m19s
Jan 30 13:02:46.478310 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:02:46.551372 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:02:46.580631 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 30 13:02:46.603690 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 13:02:46.603690 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 13:02:46.603690 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 30 13:02:46.605647 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:02:46.612766 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:02:46.612877 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Jan 30 13:02:46.605883 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:02:46.613449 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:02:46.618987 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 13:02:46.676994 containerd[1476]: time="2025-01-30T13:02:46.676879520Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:02:46.709063 containerd[1476]: time="2025-01-30T13:02:46.708883200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710386 containerd[1476]: time="2025-01-30T13:02:46.710349520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710386 containerd[1476]: time="2025-01-30T13:02:46.710385320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:02:46.710463 containerd[1476]: time="2025-01-30T13:02:46.710404560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:02:46.710583 containerd[1476]: time="2025-01-30T13:02:46.710561280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:02:46.710641 containerd[1476]: time="2025-01-30T13:02:46.710603600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710724 containerd[1476]: time="2025-01-30T13:02:46.710664920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710724 containerd[1476]: time="2025-01-30T13:02:46.710683480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710889 containerd[1476]: time="2025-01-30T13:02:46.710866280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710889 containerd[1476]: time="2025-01-30T13:02:46.710887080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710941 containerd[1476]: time="2025-01-30T13:02:46.710902920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:02:46.710941 containerd[1476]: time="2025-01-30T13:02:46.710912720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.711020 containerd[1476]: time="2025-01-30T13:02:46.710995200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.711245 containerd[1476]: time="2025-01-30T13:02:46.711226560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:02:46.711364 containerd[1476]: time="2025-01-30T13:02:46.711328080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:02:46.711364 containerd[1476]: time="2025-01-30T13:02:46.711344800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:02:46.711446 containerd[1476]: time="2025-01-30T13:02:46.711430920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:02:46.711491 containerd[1476]: time="2025-01-30T13:02:46.711478680Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:02:46.719280 containerd[1476]: time="2025-01-30T13:02:46.719237920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:02:46.719280 containerd[1476]: time="2025-01-30T13:02:46.719298920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:02:46.719407 containerd[1476]: time="2025-01-30T13:02:46.719315120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:02:46.719407 containerd[1476]: time="2025-01-30T13:02:46.719332760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:02:46.719407 containerd[1476]: time="2025-01-30T13:02:46.719357960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:02:46.719652 containerd[1476]: time="2025-01-30T13:02:46.719528960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:02:46.719823 containerd[1476]: time="2025-01-30T13:02:46.719804880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:02:46.719934 containerd[1476]: time="2025-01-30T13:02:46.719916040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:02:46.719965 containerd[1476]: time="2025-01-30T13:02:46.719937000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:02:46.719965 containerd[1476]: time="2025-01-30T13:02:46.719953800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:02:46.720003 containerd[1476]: time="2025-01-30T13:02:46.719967360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720003 containerd[1476]: time="2025-01-30T13:02:46.719980480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720003 containerd[1476]: time="2025-01-30T13:02:46.719992520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720068 containerd[1476]: time="2025-01-30T13:02:46.720007960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720068 containerd[1476]: time="2025-01-30T13:02:46.720023880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720068 containerd[1476]: time="2025-01-30T13:02:46.720036680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720068 containerd[1476]: time="2025-01-30T13:02:46.720049200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720068 containerd[1476]: time="2025-01-30T13:02:46.720061440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:02:46.720146 containerd[1476]: time="2025-01-30T13:02:46.720082040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720146 containerd[1476]: time="2025-01-30T13:02:46.720097240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720146 containerd[1476]: time="2025-01-30T13:02:46.720110920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720146 containerd[1476]: time="2025-01-30T13:02:46.720123800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720146 containerd[1476]: time="2025-01-30T13:02:46.720135600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720148560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720160120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720172520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720184600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720198400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720210200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720222560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720238 containerd[1476]: time="2025-01-30T13:02:46.720234440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720366 containerd[1476]: time="2025-01-30T13:02:46.720248960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:02:46.720366 containerd[1476]: time="2025-01-30T13:02:46.720268240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720366 containerd[1476]: time="2025-01-30T13:02:46.720281320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720366 containerd[1476]: time="2025-01-30T13:02:46.720295920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720474320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720492920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720506600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720519720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720528800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720552800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720562840Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:02:46.720696 containerd[1476]: time="2025-01-30T13:02:46.720573560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:02:46.720985 containerd[1476]: time="2025-01-30T13:02:46.720944120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:02:46.721087 containerd[1476]: time="2025-01-30T13:02:46.720992960Z" level=info msg="Connect containerd service" Jan 30 13:02:46.721087 containerd[1476]: time="2025-01-30T13:02:46.721023440Z" level=info msg="using legacy CRI server" Jan 30 13:02:46.721087 containerd[1476]: time="2025-01-30T13:02:46.721030080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:02:46.721281 containerd[1476]: time="2025-01-30T13:02:46.721258720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:02:46.721990 containerd[1476]: time="2025-01-30T13:02:46.721937000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722253120Z" level=info msg="Start subscribing containerd event" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722318040Z" level=info msg="Start recovering state" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722390800Z" level=info msg="Start event monitor" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722410280Z" level=info msg="Start snapshots syncer" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722421360Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722430000Z" level=info msg="Start streaming server" Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722511120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722558640Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:02:46.723430 containerd[1476]: time="2025-01-30T13:02:46.722634640Z" level=info msg="containerd successfully booted in 0.047243s" Jan 30 13:02:46.722813 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:02:46.894366 tar[1465]: linux-arm64/README.md Jan 30 13:02:46.904967 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:02:46.910274 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:02:46.924859 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:02:46.936912 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:02:46.944089 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:02:46.944341 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:02:46.948520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:02:46.963854 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:02:46.977016 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:02:46.979623 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:02:46.980963 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:02:47.361744 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 30 13:02:47.364320 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:02:47.366275 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:02:47.377912 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:02:47.380755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:02:47.383349 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:02:47.403947 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:02:47.404414 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:02:47.406319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:02:47.423559 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:02:48.022100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:02:48.023619 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:02:48.027758 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:02:48.029140 systemd[1]: Startup finished in 645ms (kernel) + 5.356s (initrd) + 3.976s (userspace) = 9.978s. Jan 30 13:02:48.036841 agetty[1532]: failed to open credentials directory Jan 30 13:02:48.036922 agetty[1531]: failed to open credentials directory Jan 30 13:02:48.536404 kubelet[1555]: E0130 13:02:48.536342 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:02:48.538923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:02:48.539076 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:02:51.756384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:02:51.757518 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:47068.service - OpenSSH per-connection server daemon (10.0.0.1:47068). Jan 30 13:02:51.819498 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 47068 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:51.821991 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:51.843288 systemd-logind[1455]: New session 1 of user core. Jan 30 13:02:51.845031 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:02:51.863572 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:02:51.873873 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:02:51.876358 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:02:51.885091 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:02:51.965360 systemd[1572]: Queued start job for default target default.target. Jan 30 13:02:51.974785 systemd[1572]: Created slice app.slice - User Application Slice. Jan 30 13:02:51.974908 systemd[1572]: Reached target paths.target - Paths. Jan 30 13:02:51.974978 systemd[1572]: Reached target timers.target - Timers. Jan 30 13:02:51.976288 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:02:51.986976 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:02:51.987085 systemd[1572]: Reached target sockets.target - Sockets. Jan 30 13:02:51.987097 systemd[1572]: Reached target basic.target - Basic System. Jan 30 13:02:51.987131 systemd[1572]: Reached target default.target - Main User Target. Jan 30 13:02:51.987159 systemd[1572]: Startup finished in 93ms. Jan 30 13:02:51.987320 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:02:51.988649 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:02:52.064978 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:47084.service - OpenSSH per-connection server daemon (10.0.0.1:47084). 
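The kubelet failure above (exit status 1, open /var/lib/kubelet/config.yaml: no such file or directory) is the normal pre-bootstrap state of a kubeadm-managed node: the unit starts eagerly, finds no config file because neither kubeadm init nor kubeadm join has run yet, and is left for systemd to respawn. The file that eventually satisfies it is a KubeletConfiguration written by kubeadm; a minimal sketch of its shape, with illustrative values rather than the contents of this node's file:

    # /var/lib/kubelet/config.yaml (illustrative sketch, not this node's file)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # pairs with SystemdCgroup:true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                 # kubeadm's default service-CIDR DNS address

The cgroupDriver: systemd line matching the runc SystemdCgroup:true in the containerd config dump is the pairing kubeadm sets up so kubelet and runtime agree on a single cgroup manager.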
Jan 30 13:02:52.106315 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 47084 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:52.107640 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:52.111646 systemd-logind[1455]: New session 2 of user core. Jan 30 13:02:52.118740 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:02:52.171334 sshd[1585]: Connection closed by 10.0.0.1 port 47084 Jan 30 13:02:52.171874 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:52.187760 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:47084.service: Deactivated successfully. Jan 30 13:02:52.189742 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:02:52.191087 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:02:52.204919 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:47088.service - OpenSSH per-connection server daemon (10.0.0.1:47088). Jan 30 13:02:52.206611 systemd-logind[1455]: Removed session 2. Jan 30 13:02:52.245468 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 47088 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:52.246982 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:52.251818 systemd-logind[1455]: New session 3 of user core. Jan 30 13:02:52.270826 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:02:52.319357 sshd[1592]: Connection closed by 10.0.0.1 port 47088 Jan 30 13:02:52.319647 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:52.337040 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:47088.service: Deactivated successfully. Jan 30 13:02:52.340482 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:02:52.341775 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:02:52.350881 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:47090.service - OpenSSH per-connection server daemon (10.0.0.1:47090). Jan 30 13:02:52.351873 systemd-logind[1455]: Removed session 3. Jan 30 13:02:52.391444 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 47090 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:52.392747 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:52.396364 systemd-logind[1455]: New session 4 of user core. Jan 30 13:02:52.410789 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:02:52.462838 sshd[1599]: Connection closed by 10.0.0.1 port 47090 Jan 30 13:02:52.463438 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:52.480150 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:47090.service: Deactivated successfully. Jan 30 13:02:52.481679 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:02:52.482846 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:02:52.493069 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:47100.service - OpenSSH per-connection server daemon (10.0.0.1:47100). Jan 30 13:02:52.494172 systemd-logind[1455]: Removed session 4. 
Jan 30 13:02:52.534580 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 47100 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:52.535889 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:52.539676 systemd-logind[1455]: New session 5 of user core. Jan 30 13:02:52.553818 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:02:52.622337 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:02:52.622639 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:02:52.636650 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 30 13:02:52.641930 sshd[1606]: Connection closed by 10.0.0.1 port 47100 Jan 30 13:02:52.642467 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:52.649966 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:47100.service: Deactivated successfully. Jan 30 13:02:52.651431 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:02:52.652653 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:02:52.662942 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:39756.service - OpenSSH per-connection server daemon (10.0.0.1:39756). Jan 30 13:02:52.663916 systemd-logind[1455]: Removed session 5. Jan 30 13:02:52.704336 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 39756 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:52.705863 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:52.709675 systemd-logind[1455]: New session 6 of user core. Jan 30 13:02:52.725794 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:02:52.777829 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:02:52.778099 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:02:52.781780 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 30 13:02:52.787412 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:02:52.787730 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:02:52.807955 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:02:52.834103 augenrules[1638]: No rules Jan 30 13:02:52.842419 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:02:52.842665 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:02:52.843837 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 30 13:02:52.845416 sshd[1614]: Connection closed by 10.0.0.1 port 39756 Jan 30 13:02:52.845865 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Jan 30 13:02:52.857203 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:39756.service: Deactivated successfully. Jan 30 13:02:52.859120 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:02:52.860325 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:02:52.862873 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:39764.service - OpenSSH per-connection server daemon (10.0.0.1:39764). Jan 30 13:02:52.863728 systemd-logind[1455]: Removed session 6. 
Jan 30 13:02:52.918206 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 39764 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:02:52.919320 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:02:52.926395 systemd-logind[1455]: New session 7 of user core. Jan 30 13:02:52.934833 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:02:52.989266 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:02:52.989538 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:02:53.362880 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:02:53.362963 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:02:53.694286 dockerd[1670]: time="2025-01-30T13:02:53.694159587Z" level=info msg="Starting up" Jan 30 13:02:53.934465 systemd[1]: var-lib-docker-metacopy\x2dcheck3152433123-merged.mount: Deactivated successfully. Jan 30 13:02:53.952364 dockerd[1670]: time="2025-01-30T13:02:53.952259411Z" level=info msg="Loading containers: start." Jan 30 13:02:54.202618 kernel: Initializing XFRM netlink socket Jan 30 13:02:54.323167 systemd-networkd[1402]: docker0: Link UP Jan 30 13:02:54.370146 dockerd[1670]: time="2025-01-30T13:02:54.370087078Z" level=info msg="Loading containers: done." Jan 30 13:02:54.387320 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2949547093-merged.mount: Deactivated successfully. Jan 30 13:02:54.398137 dockerd[1670]: time="2025-01-30T13:02:54.398076540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:02:54.398266 dockerd[1670]: time="2025-01-30T13:02:54.398189443Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:02:54.398407 dockerd[1670]: time="2025-01-30T13:02:54.398375464Z" level=info msg="Daemon has completed initialization" Jan 30 13:02:54.440909 dockerd[1670]: time="2025-01-30T13:02:54.440835281Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:02:54.441078 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:02:55.072944 containerd[1476]: time="2025-01-30T13:02:55.072647246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:02:55.898274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134765104.mount: Deactivated successfully. 
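The PullImage/ImageCreate sequence that begins here is containerd's CRI image service prefetching the v1.32.1 control-plane images, presumably driven by the install script executed in the sudo session above; kubeadm performs the same prefetch during init (or explicitly via kubeadm config images pull). The equivalent manual pull against this containerd socket, assuming crictl is installed on the node (it is not shown in this log):

    # illustrative command, not taken from this log
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.1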
Jan 30 13:02:56.992034 containerd[1476]: time="2025-01-30T13:02:56.991967696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:56.992628 containerd[1476]: time="2025-01-30T13:02:56.992555741Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220950" Jan 30 13:02:56.993732 containerd[1476]: time="2025-01-30T13:02:56.993677920Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:56.996886 containerd[1476]: time="2025-01-30T13:02:56.996832876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:56.998139 containerd[1476]: time="2025-01-30T13:02:56.998097105Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.925400818s" Jan 30 13:02:56.998210 containerd[1476]: time="2025-01-30T13:02:56.998143446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 30 13:02:56.998987 containerd[1476]: time="2025-01-30T13:02:56.998962956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:02:58.188904 containerd[1476]: time="2025-01-30T13:02:58.188841362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:58.189814 containerd[1476]: time="2025-01-30T13:02:58.189774471Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527109" Jan 30 13:02:58.192840 containerd[1476]: time="2025-01-30T13:02:58.192794966Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:58.196508 containerd[1476]: time="2025-01-30T13:02:58.196455406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:58.198390 containerd[1476]: time="2025-01-30T13:02:58.198350606Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.19935007s" Jan 30 13:02:58.198604 containerd[1476]: time="2025-01-30T13:02:58.198498902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 30 13:02:58.199229 
containerd[1476]: time="2025-01-30T13:02:58.199177077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:02:58.789359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:02:58.798823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:02:58.907502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:02:58.912421 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:02:58.954650 kubelet[1935]: E0130 13:02:58.954561 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:02:58.957480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:02:58.957649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:02:59.601351 containerd[1476]: time="2025-01-30T13:02:59.601303712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:59.602886 containerd[1476]: time="2025-01-30T13:02:59.602833598Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481115" Jan 30 13:02:59.605604 containerd[1476]: time="2025-01-30T13:02:59.604170739Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:59.606764 containerd[1476]: time="2025-01-30T13:02:59.606727037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:02:59.607936 containerd[1476]: time="2025-01-30T13:02:59.607905044Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.40868523s" Jan 30 13:02:59.607989 containerd[1476]: time="2025-01-30T13:02:59.607943163Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 30 13:02:59.608560 containerd[1476]: time="2025-01-30T13:02:59.608362786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:03:00.733005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893407709.mount: Deactivated successfully. 
Jan 30 13:03:01.005605 containerd[1476]: time="2025-01-30T13:03:01.005318700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:01.005983 containerd[1476]: time="2025-01-30T13:03:01.005917983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 30 13:03:01.007087 containerd[1476]: time="2025-01-30T13:03:01.007030829Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:01.009186 containerd[1476]: time="2025-01-30T13:03:01.009141123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:01.009788 containerd[1476]: time="2025-01-30T13:03:01.009752732Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.401356727s" Jan 30 13:03:01.009848 containerd[1476]: time="2025-01-30T13:03:01.009788711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 30 13:03:01.010328 containerd[1476]: time="2025-01-30T13:03:01.010285720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:03:01.618231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147272047.mount: Deactivated successfully. 
Jan 30 13:03:02.466884 containerd[1476]: time="2025-01-30T13:03:02.466759127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:02.467461 containerd[1476]: time="2025-01-30T13:03:02.467415046Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jan 30 13:03:02.468572 containerd[1476]: time="2025-01-30T13:03:02.468527249Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:02.471485 containerd[1476]: time="2025-01-30T13:03:02.471440327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:02.473025 containerd[1476]: time="2025-01-30T13:03:02.472890483Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.462570856s" Jan 30 13:03:02.473025 containerd[1476]: time="2025-01-30T13:03:02.472929820Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 30 13:03:02.473467 containerd[1476]: time="2025-01-30T13:03:02.473439483Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:03:02.939532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258910378.mount: Deactivated successfully. 
Jan 30 13:03:02.944999 containerd[1476]: time="2025-01-30T13:03:02.944933687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:02.945879 containerd[1476]: time="2025-01-30T13:03:02.945630778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 30 13:03:02.946645 containerd[1476]: time="2025-01-30T13:03:02.946613560Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:02.949846 containerd[1476]: time="2025-01-30T13:03:02.949803034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:02.950729 containerd[1476]: time="2025-01-30T13:03:02.950701916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 477.228163ms" Jan 30 13:03:02.950820 containerd[1476]: time="2025-01-30T13:03:02.950731877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 13:03:02.951435 containerd[1476]: time="2025-01-30T13:03:02.951261249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:03:03.654325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262425583.mount: Deactivated successfully. Jan 30 13:03:05.153396 containerd[1476]: time="2025-01-30T13:03:05.153330906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:05.155347 containerd[1476]: time="2025-01-30T13:03:05.155287559Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Jan 30 13:03:05.156159 containerd[1476]: time="2025-01-30T13:03:05.156130059Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:05.159503 containerd[1476]: time="2025-01-30T13:03:05.159464455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:05.160856 containerd[1476]: time="2025-01-30T13:03:05.160816215Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.209516662s" Jan 30 13:03:05.160918 containerd[1476]: time="2025-01-30T13:03:05.160856049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 30 13:03:09.208119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
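The restart cadence visible here (failure at 13:02:48, restart counter 1 at 13:02:58, counter 2 at 13:03:09) matches the roughly ten-second respawn loop that kubeadm's packaged kubelet.service commonly configures; it is what keeps retrying until the config file appears. The relevant directives, sketched from the stock kubeadm packaging rather than read from this image:

    # kubelet.service [Service] directives (illustrative of kubeadm packaging)
    Restart=always
    RestartSec=10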
Jan 30 13:03:09.217821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:09.348552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:09.354117 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:03:09.396285 kubelet[2097]: E0130 13:03:09.396236 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:03:09.398842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:03:09.398993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:03:09.581488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:09.597867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:09.624531 systemd[1]: Reloading requested from client PID 2112 ('systemctl') (unit session-7.scope)... Jan 30 13:03:09.624720 systemd[1]: Reloading... Jan 30 13:03:09.699623 zram_generator::config[2154]: No configuration found. Jan 30 13:03:09.867981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:03:09.921760 systemd[1]: Reloading finished in 296 ms. Jan 30 13:03:09.957344 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:03:09.957410 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:03:09.957651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:09.959854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:10.068098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:10.073170 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:03:10.112096 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:10.112096 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:03:10.112096 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
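The deprecation warnings above mean those CLI flags now have KubeletConfiguration equivalents that the tooling is expected to migrate into the config file; for example, the runtime socket flag corresponds to a config field added to the v1beta1 API in Kubernetes 1.27:

    # KubeletConfiguration fragment; value illustrative
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

The flags still work in this release, which is why the kubelet proceeds past the warnings.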
Jan 30 13:03:10.112096 kubelet[2196]: I0130 13:03:10.112035 2196 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:03:10.738225 kubelet[2196]: I0130 13:03:10.738159 2196 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:03:10.738225 kubelet[2196]: I0130 13:03:10.738200 2196 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:03:10.738511 kubelet[2196]: I0130 13:03:10.738480 2196 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:03:10.776166 kubelet[2196]: E0130 13:03:10.776114 2196 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:10.777127 kubelet[2196]: I0130 13:03:10.777089 2196 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:03:10.788162 kubelet[2196]: E0130 13:03:10.788081 2196 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:03:10.788903 kubelet[2196]: I0130 13:03:10.788317 2196 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:03:10.791183 kubelet[2196]: I0130 13:03:10.791163 2196 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:03:10.792455 kubelet[2196]: I0130 13:03:10.792401 2196 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:03:10.792666 kubelet[2196]: I0130 13:03:10.792453 2196 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:03:10.792788 kubelet[2196]: I0130 13:03:10.792729 2196 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:03:10.792788 kubelet[2196]: I0130 13:03:10.792738 2196 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:03:10.792981 kubelet[2196]: I0130 13:03:10.792955 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:10.795900 kubelet[2196]: I0130 13:03:10.795860 2196 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:03:10.795900 kubelet[2196]: I0130 13:03:10.795904 2196 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:03:10.796004 kubelet[2196]: I0130 13:03:10.795933 2196 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:03:10.796004 kubelet[2196]: I0130 13:03:10.795945 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:03:10.799046 kubelet[2196]: I0130 13:03:10.798580 2196 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:03:10.799046 kubelet[2196]: W0130 13:03:10.798920 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 30 13:03:10.799046 kubelet[2196]: E0130 13:03:10.798976 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:10.799254 kubelet[2196]: W0130 13:03:10.799196 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 30 13:03:10.799286 kubelet[2196]: E0130 13:03:10.799252 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:10.799286 kubelet[2196]: I0130 13:03:10.799265 2196 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:03:10.799428 kubelet[2196]: W0130 13:03:10.799410 2196 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:03:10.800454 kubelet[2196]: I0130 13:03:10.800428 2196 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:03:10.800523 kubelet[2196]: I0130 13:03:10.800466 2196 server.go:1287] "Started kubelet" Jan 30 13:03:10.801929 kubelet[2196]: I0130 13:03:10.801901 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:03:10.802846 kubelet[2196]: I0130 13:03:10.802766 2196 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:03:10.803483 kubelet[2196]: I0130 13:03:10.803209 2196 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:03:10.803483 kubelet[2196]: I0130 13:03:10.803290 2196 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:03:10.804577 kubelet[2196]: I0130 13:03:10.804536 2196 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:03:10.807067 kubelet[2196]: I0130 13:03:10.807014 2196 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:03:10.808895 kubelet[2196]: I0130 13:03:10.808406 2196 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:03:10.808895 kubelet[2196]: E0130 13:03:10.808536 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:10.808895 kubelet[2196]: I0130 13:03:10.808583 2196 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:03:10.808895 kubelet[2196]: I0130 13:03:10.808658 2196 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:03:10.808895 kubelet[2196]: E0130 13:03:10.808864 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Jan 30 13:03:10.809103 kubelet[2196]: W0130 13:03:10.808952 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.99:6443: connect: connection refused Jan 30 13:03:10.809103 kubelet[2196]: E0130 13:03:10.809002 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:10.809291 kubelet[2196]: I0130 13:03:10.809219 2196 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:03:10.809328 kubelet[2196]: I0130 13:03:10.809317 2196 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:03:10.809359 kubelet[2196]: E0130 13:03:10.809349 2196 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:03:10.809680 kubelet[2196]: E0130 13:03:10.808863 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7a101c41e1f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:03:10.800445944 +0000 UTC m=+0.723414572,LastTimestamp:2025-01-30 13:03:10.800445944 +0000 UTC m=+0.723414572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:03:10.811632 kubelet[2196]: I0130 13:03:10.811507 2196 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:03:10.822342 kubelet[2196]: I0130 13:03:10.822272 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:03:10.823635 kubelet[2196]: I0130 13:03:10.823595 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:03:10.823635 kubelet[2196]: I0130 13:03:10.823634 2196 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:03:10.823749 kubelet[2196]: I0130 13:03:10.823657 2196 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:03:10.823749 kubelet[2196]: I0130 13:03:10.823665 2196 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:03:10.823749 kubelet[2196]: E0130 13:03:10.823716 2196 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:03:10.824396 kubelet[2196]: W0130 13:03:10.824345 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 30 13:03:10.824466 kubelet[2196]: E0130 13:03:10.824404 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:10.831870 kubelet[2196]: I0130 13:03:10.831841 2196 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:03:10.832058 kubelet[2196]: I0130 13:03:10.832044 2196 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:03:10.832124 kubelet[2196]: I0130 13:03:10.832114 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:10.909030 kubelet[2196]: E0130 13:03:10.908993 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:10.924201 kubelet[2196]: E0130 13:03:10.924162 2196 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:03:11.009569 kubelet[2196]: E0130 13:03:11.009323 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:11.010815 kubelet[2196]: E0130 13:03:11.010774 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Jan 30 13:03:11.110119 kubelet[2196]: E0130 13:03:11.110070 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:11.124795 kubelet[2196]: E0130 13:03:11.124749 2196 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:03:11.168843 kubelet[2196]: I0130 13:03:11.168794 2196 policy_none.go:49] "None policy: Start" Jan 30 13:03:11.168843 kubelet[2196]: I0130 13:03:11.168834 2196 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:03:11.168843 kubelet[2196]: I0130 13:03:11.168846 2196 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:03:11.176790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:03:11.193757 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:03:11.197212 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
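From here on, every "dial tcp 10.0.0.99:6443: connect: connection refused" line is the kubelet's client-go reflectors and lease controller probing an API server that is not running yet. Because this kubelet runs with a static pod path ("Adding static pod path" above, and "Not starting ClusterTrustBundle informer because we are in static kubelet mode"), it will create the control-plane containers itself from /etc/kubernetes/manifests, and the refused connections stop once kube-apiserver is listening on 6443. Two ways to watch that from the node, assuming crictl is configured for this containerd socket:

    # illustrative checks, not commands from this log
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    ss -ltn '( sport = :6443 )'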
Jan 30 13:03:11.208702 kubelet[2196]: I0130 13:03:11.208660 2196 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:03:11.209155 kubelet[2196]: I0130 13:03:11.208887 2196 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:03:11.209155 kubelet[2196]: I0130 13:03:11.208904 2196 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:03:11.209155 kubelet[2196]: I0130 13:03:11.209118 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:03:11.210043 kubelet[2196]: E0130 13:03:11.209969 2196 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:03:11.210111 kubelet[2196]: E0130 13:03:11.210066 2196 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:03:11.311543 kubelet[2196]: I0130 13:03:11.311425 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:03:11.311953 kubelet[2196]: E0130 13:03:11.311915 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 30 13:03:11.411860 kubelet[2196]: E0130 13:03:11.411804 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Jan 30 13:03:11.513375 kubelet[2196]: I0130 13:03:11.513339 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:03:11.513748 kubelet[2196]: E0130 13:03:11.513716 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 30 13:03:11.533166 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 30 13:03:11.544540 kubelet[2196]: E0130 13:03:11.544491 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:03:11.547783 systemd[1]: Created slice kubepods-burstable-podf85fcd9d3b88c147c29f0e02b8826b61.slice - libcontainer container kubepods-burstable-podf85fcd9d3b88c147c29f0e02b8826b61.slice. Jan 30 13:03:11.551024 kubelet[2196]: E0130 13:03:11.550673 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:03:11.556055 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
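The three kubepods-burstable slices created above correspond to the control-plane static pods whose manifests kubeadm writes to /etc/kubernetes/manifests: the pod UIDs embedded in the slice names (eb981... for kube-scheduler, f85f... for kube-apiserver, e9ba... for kube-controller-manager) reappear in the volume-mount and RunPodSandbox lines below. On a standard kubeadm control plane that directory holds, illustratively:

    # etcd typically has a manifest here too, though etcd activity
    # falls outside this excerpt
    ls /etc/kubernetes/manifests
    etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

The "No need to create a mirror pod" errors are benign at this stage: mirror pods can only be registered once the API server those manifests describe is up.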
Jan 30 13:03:11.557767 kubelet[2196]: E0130 13:03:11.557705 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:11.612763 kubelet[2196]: I0130 13:03:11.612574 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85fcd9d3b88c147c29f0e02b8826b61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85fcd9d3b88c147c29f0e02b8826b61\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:11.612763 kubelet[2196]: I0130 13:03:11.612632 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85fcd9d3b88c147c29f0e02b8826b61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85fcd9d3b88c147c29f0e02b8826b61\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:11.612763 kubelet[2196]: I0130 13:03:11.612661 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85fcd9d3b88c147c29f0e02b8826b61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f85fcd9d3b88c147c29f0e02b8826b61\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:11.612763 kubelet[2196]: I0130 13:03:11.612689 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:11.612763 kubelet[2196]: I0130 13:03:11.612709 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:11.612960 kubelet[2196]: I0130 13:03:11.612726 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:11.612960 kubelet[2196]: I0130 13:03:11.612755 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:03:11.612960 kubelet[2196]: I0130 13:03:11.612774 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:11.612960 kubelet[2196]: I0130 13:03:11.612793 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:11.845940 kubelet[2196]: E0130 13:03:11.845835 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:11.846646 containerd[1476]: time="2025-01-30T13:03:11.846575962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}"
Jan 30 13:03:11.852121 kubelet[2196]: E0130 13:03:11.852087 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:11.852291 kubelet[2196]: W0130 13:03:11.852242 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Jan 30 13:03:11.852357 kubelet[2196]: E0130 13:03:11.852299 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:03:11.852564 containerd[1476]: time="2025-01-30T13:03:11.852518771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f85fcd9d3b88c147c29f0e02b8826b61,Namespace:kube-system,Attempt:0,}"
Jan 30 13:03:11.859226 kubelet[2196]: E0130 13:03:11.859117 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:11.861037 containerd[1476]: time="2025-01-30T13:03:11.860993429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}"
Jan 30 13:03:11.915694 kubelet[2196]: I0130 13:03:11.915520 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 30 13:03:11.916037 kubelet[2196]: E0130 13:03:11.915884 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Jan 30 13:03:12.010737 kubelet[2196]: W0130 13:03:12.010692 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Jan 30 13:03:12.010737 kubelet[2196]: E0130 13:03:12.010741 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:03:12.078576 kubelet[2196]: W0130 13:03:12.078501 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Jan 30 13:03:12.078576 kubelet[2196]: E0130 13:03:12.078566 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:03:12.213084 kubelet[2196]: E0130 13:03:12.212922 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s"
Jan 30 13:03:12.311073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718860643.mount: Deactivated successfully.
Jan 30 13:03:12.370009 containerd[1476]: time="2025-01-30T13:03:12.369951694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:03:12.381124 kubelet[2196]: W0130 13:03:12.381055 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Jan 30 13:03:12.381124 kubelet[2196]: E0130 13:03:12.381124 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:03:12.395188 containerd[1476]: time="2025-01-30T13:03:12.394854477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 30 13:03:12.406836 containerd[1476]: time="2025-01-30T13:03:12.406782728Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:03:12.408712 containerd[1476]: time="2025-01-30T13:03:12.408575344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:03:12.418942 containerd[1476]: time="2025-01-30T13:03:12.418797938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:03:12.420164 containerd[1476]: time="2025-01-30T13:03:12.420118164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:03:12.421983 containerd[1476]: time="2025-01-30T13:03:12.421918810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:03:12.422971 containerd[1476]: time="2025-01-30T13:03:12.422925427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:03:12.425641 containerd[1476]: time="2025-01-30T13:03:12.425274759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.203004ms"
Jan 30 13:03:12.426633 containerd[1476]: time="2025-01-30T13:03:12.426347006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.65693ms"
Jan 30 13:03:12.430746 containerd[1476]: time="2025-01-30T13:03:12.430685644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.071931ms"
Jan 30 13:03:12.591782 containerd[1476]: time="2025-01-30T13:03:12.591108420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:03:12.591782 containerd[1476]: time="2025-01-30T13:03:12.591667173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:03:12.592684 containerd[1476]: time="2025-01-30T13:03:12.592470748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:03:12.592684 containerd[1476]: time="2025-01-30T13:03:12.592521279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:03:12.592684 containerd[1476]: time="2025-01-30T13:03:12.592533223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:12.592684 containerd[1476]: time="2025-01-30T13:03:12.592636281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:12.592914 containerd[1476]: time="2025-01-30T13:03:12.591685707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:12.593106 containerd[1476]: time="2025-01-30T13:03:12.593031698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:03:12.593161 containerd[1476]: time="2025-01-30T13:03:12.593125089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:03:12.593183 containerd[1476]: time="2025-01-30T13:03:12.593155048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:12.593785 containerd[1476]: time="2025-01-30T13:03:12.593258626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:12.594501 containerd[1476]: time="2025-01-30T13:03:12.594443718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:12.626967 systemd[1]: Started cri-containerd-48f1aa9548e0461888caea0c4ff8e470f5a015d61b4e4a6ba463bd384ce477ef.scope - libcontainer container 48f1aa9548e0461888caea0c4ff8e470f5a015d61b4e4a6ba463bd384ce477ef.
Jan 30 13:03:12.628701 systemd[1]: Started cri-containerd-a3950da19057d304b2d082c0f16d76d82f192faf5d059b380c5267fdb8c64501.scope - libcontainer container a3950da19057d304b2d082c0f16d76d82f192faf5d059b380c5267fdb8c64501.
Jan 30 13:03:12.630096 systemd[1]: Started cri-containerd-eaa00d303caeebb4d35c8804bfa7eadc691aeac5670e4e7e0baa2ab83ea8a4a1.scope - libcontainer container eaa00d303caeebb4d35c8804bfa7eadc691aeac5670e4e7e0baa2ab83ea8a4a1.
Jan 30 13:03:12.669100 containerd[1476]: time="2025-01-30T13:03:12.668788367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"48f1aa9548e0461888caea0c4ff8e470f5a015d61b4e4a6ba463bd384ce477ef\""
Jan 30 13:03:12.671494 kubelet[2196]: E0130 13:03:12.671446 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:12.674638 containerd[1476]: time="2025-01-30T13:03:12.673781825Z" level=info msg="CreateContainer within sandbox \"48f1aa9548e0461888caea0c4ff8e470f5a015d61b4e4a6ba463bd384ce477ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:03:12.677262 containerd[1476]: time="2025-01-30T13:03:12.676664105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f85fcd9d3b88c147c29f0e02b8826b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3950da19057d304b2d082c0f16d76d82f192faf5d059b380c5267fdb8c64501\""
Jan 30 13:03:12.678557 containerd[1476]: time="2025-01-30T13:03:12.678513125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaa00d303caeebb4d35c8804bfa7eadc691aeac5670e4e7e0baa2ab83ea8a4a1\""
Jan 30 13:03:12.679755 kubelet[2196]: E0130 13:03:12.679445 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:12.680287 kubelet[2196]: E0130 13:03:12.680258 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:12.681836 containerd[1476]: time="2025-01-30T13:03:12.681785548Z" level=info msg="CreateContainer within sandbox \"a3950da19057d304b2d082c0f16d76d82f192faf5d059b380c5267fdb8c64501\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:03:12.682992 containerd[1476]: time="2025-01-30T13:03:12.682920309Z" level=info msg="CreateContainer within sandbox \"eaa00d303caeebb4d35c8804bfa7eadc691aeac5670e4e7e0baa2ab83ea8a4a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:03:12.705486 containerd[1476]: time="2025-01-30T13:03:12.705410767Z" level=info msg="CreateContainer within sandbox \"48f1aa9548e0461888caea0c4ff8e470f5a015d61b4e4a6ba463bd384ce477ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b030dddb381269382a27d811476ef2c4c39570d3190de35c4f0fe3c23ed9f88c\""
Jan 30 13:03:12.706295 containerd[1476]: time="2025-01-30T13:03:12.706247697Z" level=info msg="StartContainer for \"b030dddb381269382a27d811476ef2c4c39570d3190de35c4f0fe3c23ed9f88c\""
Jan 30 13:03:12.718112 kubelet[2196]: I0130 13:03:12.718060 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 30 13:03:12.719634 kubelet[2196]: E0130 13:03:12.718509 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Jan 30 13:03:12.719945 containerd[1476]: time="2025-01-30T13:03:12.719890072Z" level=info msg="CreateContainer within sandbox \"a3950da19057d304b2d082c0f16d76d82f192faf5d059b380c5267fdb8c64501\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b8a67904a374f7dd534201e06bc217d9c88dec71fa79b176df8cb13890763d4\""
Jan 30 13:03:12.720408 containerd[1476]: time="2025-01-30T13:03:12.720374806Z" level=info msg="StartContainer for \"9b8a67904a374f7dd534201e06bc217d9c88dec71fa79b176df8cb13890763d4\""
Jan 30 13:03:12.731838 containerd[1476]: time="2025-01-30T13:03:12.731770947Z" level=info msg="CreateContainer within sandbox \"eaa00d303caeebb4d35c8804bfa7eadc691aeac5670e4e7e0baa2ab83ea8a4a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e96c511323d6309b45170cde7bad3e02e935e2b84815f15d148fb0cf0e17623\""
Jan 30 13:03:12.732797 containerd[1476]: time="2025-01-30T13:03:12.732675305Z" level=info msg="StartContainer for \"3e96c511323d6309b45170cde7bad3e02e935e2b84815f15d148fb0cf0e17623\""
Jan 30 13:03:12.735833 systemd[1]: Started cri-containerd-b030dddb381269382a27d811476ef2c4c39570d3190de35c4f0fe3c23ed9f88c.scope - libcontainer container b030dddb381269382a27d811476ef2c4c39570d3190de35c4f0fe3c23ed9f88c.
Jan 30 13:03:12.755094 systemd[1]: Started cri-containerd-9b8a67904a374f7dd534201e06bc217d9c88dec71fa79b176df8cb13890763d4.scope - libcontainer container 9b8a67904a374f7dd534201e06bc217d9c88dec71fa79b176df8cb13890763d4.
Jan 30 13:03:12.767795 systemd[1]: Started cri-containerd-3e96c511323d6309b45170cde7bad3e02e935e2b84815f15d148fb0cf0e17623.scope - libcontainer container 3e96c511323d6309b45170cde7bad3e02e935e2b84815f15d148fb0cf0e17623.
Jan 30 13:03:12.827297 containerd[1476]: time="2025-01-30T13:03:12.823173878Z" level=info msg="StartContainer for \"9b8a67904a374f7dd534201e06bc217d9c88dec71fa79b176df8cb13890763d4\" returns successfully"
Jan 30 13:03:12.844409 containerd[1476]: time="2025-01-30T13:03:12.842350768Z" level=info msg="StartContainer for \"b030dddb381269382a27d811476ef2c4c39570d3190de35c4f0fe3c23ed9f88c\" returns successfully"
Jan 30 13:03:12.844409 containerd[1476]: time="2025-01-30T13:03:12.842540228Z" level=info msg="StartContainer for \"3e96c511323d6309b45170cde7bad3e02e935e2b84815f15d148fb0cf0e17623\" returns successfully"
Jan 30 13:03:12.853422 kubelet[2196]: E0130 13:03:12.850407 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:12.853422 kubelet[2196]: E0130 13:03:12.850563 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:12.862012 kubelet[2196]: E0130 13:03:12.855398 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:12.862012 kubelet[2196]: E0130 13:03:12.855687 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:12.880371 kubelet[2196]: E0130 13:03:12.879765 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:12.880371 kubelet[2196]: E0130 13:03:12.879917 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:12.887565 kubelet[2196]: E0130 13:03:12.886688 2196 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:03:13.875330 kubelet[2196]: E0130 13:03:13.875298 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:13.875683 kubelet[2196]: E0130 13:03:13.875340 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:13.875683 kubelet[2196]: E0130 13:03:13.875459 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:13.875683 kubelet[2196]: E0130 13:03:13.875503 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:14.320919 kubelet[2196]: I0130 13:03:14.320807 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 30 13:03:14.868301 kubelet[2196]: E0130 13:03:14.868262 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 30 13:03:14.876765 kubelet[2196]: E0130 13:03:14.876742 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 30 13:03:14.877053 kubelet[2196]: E0130 13:03:14.876886 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:14.939092 kubelet[2196]: I0130 13:03:14.939052 2196 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Jan 30 13:03:14.939183 kubelet[2196]: E0130 13:03:14.939104 2196 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 30 13:03:15.009364 kubelet[2196]: I0130 13:03:15.009314 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 30 13:03:15.015664 kubelet[2196]: E0130 13:03:15.015628 2196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 30 13:03:15.015664 kubelet[2196]: I0130 13:03:15.015664 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:15.017630 kubelet[2196]: E0130 13:03:15.017603 2196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:15.017630 kubelet[2196]: I0130 13:03:15.017630 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:15.019516 kubelet[2196]: E0130 13:03:15.019489 2196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:15.801237 kubelet[2196]: I0130 13:03:15.801004 2196 apiserver.go:52] "Watching apiserver"
Jan 30 13:03:15.808791 kubelet[2196]: I0130 13:03:15.808720 2196 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:03:17.047316 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-7.scope)...
Jan 30 13:03:17.047334 systemd[1]: Reloading...
Jan 30 13:03:17.118615 zram_generator::config[2517]: No configuration found.
Jan 30 13:03:17.213974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:03:17.278806 systemd[1]: Reloading finished in 231 ms.
Jan 30 13:03:17.309245 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:03:17.322628 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:03:17.324630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:03:17.324693 systemd[1]: kubelet.service: Consumed 1.172s CPU time, 123.5M memory peak, 0B memory swap peak.
Jan 30 13:03:17.342973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:03:17.443117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:03:17.449085 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:03:17.491228 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:03:17.491228 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:03:17.491228 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:03:17.491228 kubelet[2559]: I0130 13:03:17.490914 2559 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:03:17.497966 kubelet[2559]: I0130 13:03:17.497919 2559 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 30 13:03:17.499084 kubelet[2559]: I0130 13:03:17.498569 2559 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:03:17.499084 kubelet[2559]: I0130 13:03:17.499029 2559 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 30 13:03:17.501135 kubelet[2559]: I0130 13:03:17.500873 2559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:03:17.503520 kubelet[2559]: I0130 13:03:17.503389 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:03:17.508376 kubelet[2559]: E0130 13:03:17.507741 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:03:17.508376 kubelet[2559]: I0130 13:03:17.507786 2559 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:03:17.511047 kubelet[2559]: I0130 13:03:17.511011 2559 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:03:17.511247 kubelet[2559]: I0130 13:03:17.511195 2559 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:03:17.511430 kubelet[2559]: I0130 13:03:17.511231 2559 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:03:17.511430 kubelet[2559]: I0130 13:03:17.511429 2559 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:03:17.511548 kubelet[2559]: I0130 13:03:17.511437 2559 container_manager_linux.go:304] "Creating device plugin manager"
Jan 30 13:03:17.511548 kubelet[2559]: I0130 13:03:17.511482 2559 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:03:17.511674 kubelet[2559]: I0130 13:03:17.511648 2559 kubelet.go:446] "Attempting to sync node with API server"
Jan 30 13:03:17.511674 kubelet[2559]: I0130 13:03:17.511666 2559 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:03:17.511731 kubelet[2559]: I0130 13:03:17.511682 2559 kubelet.go:352] "Adding apiserver pod source"
Jan 30 13:03:17.511731 kubelet[2559]: I0130 13:03:17.511692 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:03:17.513843 kubelet[2559]: I0130 13:03:17.512804 2559 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:03:17.513843 kubelet[2559]: I0130 13:03:17.513241 2559 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:03:17.513843 kubelet[2559]: I0130 13:03:17.513649 2559 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 13:03:17.513843 kubelet[2559]: I0130 13:03:17.513676 2559 server.go:1287] "Started kubelet"
Jan 30 13:03:17.514455 kubelet[2559]: I0130 13:03:17.514411 2559 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:03:17.514920 kubelet[2559]: I0130 13:03:17.514869 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:03:17.515104 kubelet[2559]: I0130 13:03:17.515086 2559 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:03:17.517418 kubelet[2559]: I0130 13:03:17.517395 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:03:17.518329 kubelet[2559]: I0130 13:03:17.518300 2559 server.go:490] "Adding debug handlers to kubelet server"
Jan 30 13:03:17.525682 kubelet[2559]: I0130 13:03:17.525650 2559 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 13:03:17.525992 kubelet[2559]: I0130 13:03:17.525961 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:03:17.526595 kubelet[2559]: E0130 13:03:17.526568 2559 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:03:17.527465 kubelet[2559]: I0130 13:03:17.526730 2559 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:03:17.527465 kubelet[2559]: I0130 13:03:17.527275 2559 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:03:17.529480 kubelet[2559]: I0130 13:03:17.529454 2559 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:03:17.529676 kubelet[2559]: I0130 13:03:17.529572 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:03:17.542269 kubelet[2559]: I0130 13:03:17.542161 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:03:17.544540 kubelet[2559]: I0130 13:03:17.544502 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:03:17.544703 kubelet[2559]: I0130 13:03:17.544692 2559 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 13:03:17.544928 kubelet[2559]: I0130 13:03:17.544780 2559 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 13:03:17.544928 kubelet[2559]: I0130 13:03:17.544852 2559 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 13:03:17.544989 kubelet[2559]: E0130 13:03:17.544904 2559 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:03:17.548264 kubelet[2559]: I0130 13:03:17.548235 2559 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:03:17.550653 kubelet[2559]: E0130 13:03:17.550401 2559 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.585744 2559 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.585772 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.585796 2559 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.586006 2559 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.586017 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.586036 2559 policy_none.go:49] "None policy: Start"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.586059 2559 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.586070 2559 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:03:17.586642 kubelet[2559]: I0130 13:03:17.586304 2559 state_mem.go:75] "Updated machine memory state"
Jan 30 13:03:17.592068 kubelet[2559]: I0130 13:03:17.592032 2559 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:03:17.592326 kubelet[2559]: I0130 13:03:17.592294 2559 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:03:17.592371 kubelet[2559]: I0130 13:03:17.592316 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:03:17.592637 kubelet[2559]: I0130 13:03:17.592613 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:03:17.593365 kubelet[2559]: E0130 13:03:17.593335 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 13:03:17.645978 kubelet[2559]: I0130 13:03:17.645911 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:17.646152 kubelet[2559]: I0130 13:03:17.645924 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:17.646394 kubelet[2559]: I0130 13:03:17.646354 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 30 13:03:17.696872 kubelet[2559]: I0130 13:03:17.696845 2559 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Jan 30 13:03:17.703909 kubelet[2559]: I0130 13:03:17.703874 2559 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Jan 30 13:03:17.704043 kubelet[2559]: I0130 13:03:17.703966 2559 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Jan 30 13:03:17.729122 kubelet[2559]: I0130 13:03:17.728814 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:17.729122 kubelet[2559]: I0130 13:03:17.728854 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:17.729122 kubelet[2559]: I0130 13:03:17.728880 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:17.729921 kubelet[2559]: I0130 13:03:17.729876 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:03:17.729994 kubelet[2559]: I0130 13:03:17.729928 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85fcd9d3b88c147c29f0e02b8826b61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85fcd9d3b88c147c29f0e02b8826b61\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:17.729994 kubelet[2559]: I0130 13:03:17.729952 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85fcd9d3b88c147c29f0e02b8826b61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85fcd9d3b88c147c29f0e02b8826b61\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:17.730041 kubelet[2559]: I0130 13:03:17.729990 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85fcd9d3b88c147c29f0e02b8826b61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f85fcd9d3b88c147c29f0e02b8826b61\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:03:17.730041 kubelet[2559]: I0130 13:03:17.730027 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:17.730089 kubelet[2559]: I0130 13:03:17.730067 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:03:17.962992 kubelet[2559]: E0130 13:03:17.962609 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:17.962992 kubelet[2559]: E0130 13:03:17.962636 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:17.963416 kubelet[2559]: E0130 13:03:17.963335 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:18.513002 kubelet[2559]: I0130 13:03:18.512918 2559 apiserver.go:52] "Watching apiserver"
Jan 30 13:03:18.526930 kubelet[2559]: I0130 13:03:18.526895 2559 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:03:18.563439 kubelet[2559]: E0130 13:03:18.563208 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:18.563439 kubelet[2559]: E0130 13:03:18.563331 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:18.564703 kubelet[2559]: E0130 13:03:18.563613 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:18.577230 kubelet[2559]: I0130 13:03:18.576722 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5767029080000001 podStartE2EDuration="1.576702908s" podCreationTimestamp="2025-01-30 13:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:18.575802747 +0000 UTC m=+1.122898710" watchObservedRunningTime="2025-01-30 13:03:18.576702908 +0000 UTC m=+1.123798871"
Jan 30 13:03:18.577230 kubelet[2559]: I0130 13:03:18.576841 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.576836903 podStartE2EDuration="1.576836903s" podCreationTimestamp="2025-01-30 13:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:18.564672166 +0000 UTC m=+1.111768129" watchObservedRunningTime="2025-01-30 13:03:18.576836903 +0000 UTC m=+1.123932866"
Jan 30 13:03:18.584580 kubelet[2559]: I0130 13:03:18.584520 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.584506032 podStartE2EDuration="1.584506032s" podCreationTimestamp="2025-01-30 13:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:18.584472144 +0000 UTC m=+1.131568067" watchObservedRunningTime="2025-01-30 13:03:18.584506032 +0000 UTC m=+1.131601955"
Jan 30 13:03:19.566093 kubelet[2559]: E0130 13:03:19.564477 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:19.566093 kubelet[2559]: E0130 13:03:19.565073 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:20.565988 kubelet[2559]: E0130 13:03:20.565944 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:22.342538 kubelet[2559]: I0130 13:03:22.342377 2559 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:03:22.353139 containerd[1476]: time="2025-01-30T13:03:22.353064343Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:03:22.356359 kubelet[2559]: I0130 13:03:22.355779 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:03:22.421881 sudo[1649]: pam_unix(sudo:session): session closed for user root
Jan 30 13:03:22.424363 sshd[1648]: Connection closed by 10.0.0.1 port 39764
Jan 30 13:03:22.425160 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Jan 30 13:03:22.428213 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:39764.service: Deactivated successfully.
Jan 30 13:03:22.430180 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:03:22.431617 systemd[1]: session-7.scope: Consumed 6.670s CPU time, 155.4M memory peak, 0B memory swap peak.
Jan 30 13:03:22.433492 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:03:22.434513 systemd-logind[1455]: Removed session 7.
Jan 30 13:03:22.964707 kubelet[2559]: I0130 13:03:22.964664 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/565e535f-87ed-4ea4-bab0-181895569063-kube-proxy\") pod \"kube-proxy-r5gmn\" (UID: \"565e535f-87ed-4ea4-bab0-181895569063\") " pod="kube-system/kube-proxy-r5gmn"
Jan 30 13:03:22.964707 kubelet[2559]: I0130 13:03:22.964705 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/565e535f-87ed-4ea4-bab0-181895569063-xtables-lock\") pod \"kube-proxy-r5gmn\" (UID: \"565e535f-87ed-4ea4-bab0-181895569063\") " pod="kube-system/kube-proxy-r5gmn"
Jan 30 13:03:22.964875 kubelet[2559]: I0130 13:03:22.964726 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/565e535f-87ed-4ea4-bab0-181895569063-lib-modules\") pod \"kube-proxy-r5gmn\" (UID: \"565e535f-87ed-4ea4-bab0-181895569063\") " pod="kube-system/kube-proxy-r5gmn"
Jan 30 13:03:22.964875 kubelet[2559]: I0130 13:03:22.964745 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqhj\" (UniqueName: \"kubernetes.io/projected/565e535f-87ed-4ea4-bab0-181895569063-kube-api-access-mlqhj\") pod \"kube-proxy-r5gmn\" (UID: \"565e535f-87ed-4ea4-bab0-181895569063\") " pod="kube-system/kube-proxy-r5gmn"
Jan 30 13:03:22.967383 systemd[1]: Created slice kubepods-besteffort-pod565e535f_87ed_4ea4_bab0_181895569063.slice - libcontainer container kubepods-besteffort-pod565e535f_87ed_4ea4_bab0_181895569063.slice.
Jan 30 13:03:23.079711 kubelet[2559]: E0130 13:03:23.079660 2559 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 30 13:03:23.079711 kubelet[2559]: E0130 13:03:23.079696 2559 projected.go:194] Error preparing data for projected volume kube-api-access-mlqhj for pod kube-system/kube-proxy-r5gmn: configmap "kube-root-ca.crt" not found
Jan 30 13:03:23.079887 kubelet[2559]: E0130 13:03:23.079764 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/565e535f-87ed-4ea4-bab0-181895569063-kube-api-access-mlqhj podName:565e535f-87ed-4ea4-bab0-181895569063 nodeName:}" failed. No retries permitted until 2025-01-30 13:03:23.579743725 +0000 UTC m=+6.126839688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mlqhj" (UniqueName: "kubernetes.io/projected/565e535f-87ed-4ea4-bab0-181895569063-kube-api-access-mlqhj") pod "kube-proxy-r5gmn" (UID: "565e535f-87ed-4ea4-bab0-181895569063") : configmap "kube-root-ca.crt" not found
Jan 30 13:03:23.414845 kubelet[2559]: E0130 13:03:23.414737 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:23.467669 kubelet[2559]: I0130 13:03:23.467620 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmjh\" (UniqueName: \"kubernetes.io/projected/5e3f70c7-d5e8-412e-9348-953e4c79dc56-kube-api-access-vlmjh\") pod \"tigera-operator-7d68577dc5-69tgw\" (UID: \"5e3f70c7-d5e8-412e-9348-953e4c79dc56\") " pod="tigera-operator/tigera-operator-7d68577dc5-69tgw"
Jan 30 13:03:23.467669 kubelet[2559]: I0130 13:03:23.467665 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e3f70c7-d5e8-412e-9348-953e4c79dc56-var-lib-calico\") pod \"tigera-operator-7d68577dc5-69tgw\" (UID: \"5e3f70c7-d5e8-412e-9348-953e4c79dc56\") " pod="tigera-operator/tigera-operator-7d68577dc5-69tgw"
Jan 30 13:03:23.469169 systemd[1]: Created slice kubepods-besteffort-pod5e3f70c7_d5e8_412e_9348_953e4c79dc56.slice - libcontainer container kubepods-besteffort-pod5e3f70c7_d5e8_412e_9348_953e4c79dc56.slice.
Jan 30 13:03:23.490776 kubelet[2559]: E0130 13:03:23.490733 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:23.571707 kubelet[2559]: E0130 13:03:23.570955 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:23.571707 kubelet[2559]: E0130 13:03:23.571009 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:23.774665 containerd[1476]: time="2025-01-30T13:03:23.774536184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-69tgw,Uid:5e3f70c7-d5e8-412e-9348-953e4c79dc56,Namespace:tigera-operator,Attempt:0,}"
Jan 30 13:03:23.796463 containerd[1476]: time="2025-01-30T13:03:23.795825971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:03:23.796463 containerd[1476]: time="2025-01-30T13:03:23.796421849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:03:23.796463 containerd[1476]: time="2025-01-30T13:03:23.796439677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:23.796781 containerd[1476]: time="2025-01-30T13:03:23.796617677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:23.818815 systemd[1]: Started cri-containerd-2805f4613e6d6dac705d3889abadd8570ad34c7246d918c1aafb23ec6a173cf2.scope - libcontainer container 2805f4613e6d6dac705d3889abadd8570ad34c7246d918c1aafb23ec6a173cf2.
Jan 30 13:03:23.847433 containerd[1476]: time="2025-01-30T13:03:23.847391479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-69tgw,Uid:5e3f70c7-d5e8-412e-9348-953e4c79dc56,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2805f4613e6d6dac705d3889abadd8570ad34c7246d918c1aafb23ec6a173cf2\""
Jan 30 13:03:23.851378 containerd[1476]: time="2025-01-30T13:03:23.851311872Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 13:03:23.876623 kubelet[2559]: E0130 13:03:23.876539 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:23.877096 containerd[1476]: time="2025-01-30T13:03:23.877047138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5gmn,Uid:565e535f-87ed-4ea4-bab0-181895569063,Namespace:kube-system,Attempt:0,}"
Jan 30 13:03:23.902207 containerd[1476]: time="2025-01-30T13:03:23.902045262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:03:23.902207 containerd[1476]: time="2025-01-30T13:03:23.902109578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:03:23.902207 containerd[1476]: time="2025-01-30T13:03:23.902120731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:23.902492 containerd[1476]: time="2025-01-30T13:03:23.902275026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:03:23.918778 systemd[1]: Started cri-containerd-5e4159fb893731780fdcc85e87a5c84c75bf7220c5b508c5e277ca9426f96bb0.scope - libcontainer container 5e4159fb893731780fdcc85e87a5c84c75bf7220c5b508c5e277ca9426f96bb0.
Jan 30 13:03:23.940235 containerd[1476]: time="2025-01-30T13:03:23.940198144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5gmn,Uid:565e535f-87ed-4ea4-bab0-181895569063,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e4159fb893731780fdcc85e87a5c84c75bf7220c5b508c5e277ca9426f96bb0\""
Jan 30 13:03:23.940986 kubelet[2559]: E0130 13:03:23.940965 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:23.943346 containerd[1476]: time="2025-01-30T13:03:23.943307765Z" level=info msg="CreateContainer within sandbox \"5e4159fb893731780fdcc85e87a5c84c75bf7220c5b508c5e277ca9426f96bb0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:03:23.957182 containerd[1476]: time="2025-01-30T13:03:23.957106329Z" level=info msg="CreateContainer within sandbox \"5e4159fb893731780fdcc85e87a5c84c75bf7220c5b508c5e277ca9426f96bb0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2be5465679691a935ea83ece7b39093dffb6c49a71891eaf2f8284ce61f539e9\""
Jan 30 13:03:23.957931 containerd[1476]: time="2025-01-30T13:03:23.957886203Z" level=info msg="StartContainer for \"2be5465679691a935ea83ece7b39093dffb6c49a71891eaf2f8284ce61f539e9\""
Jan 30 13:03:23.987850 systemd[1]: Started cri-containerd-2be5465679691a935ea83ece7b39093dffb6c49a71891eaf2f8284ce61f539e9.scope - libcontainer container 2be5465679691a935ea83ece7b39093dffb6c49a71891eaf2f8284ce61f539e9.
Jan 30 13:03:24.024958 containerd[1476]: time="2025-01-30T13:03:24.024757677Z" level=info msg="StartContainer for \"2be5465679691a935ea83ece7b39093dffb6c49a71891eaf2f8284ce61f539e9\" returns successfully"
Jan 30 13:03:24.574818 kubelet[2559]: E0130 13:03:24.574761 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:24.576243 kubelet[2559]: E0130 13:03:24.576189 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:03:24.592611 kubelet[2559]: I0130 13:03:24.592544 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r5gmn" podStartSLOduration=2.592524145 podStartE2EDuration="2.592524145s" podCreationTimestamp="2025-01-30 13:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:24.590801155 +0000 UTC m=+7.137897158" watchObservedRunningTime="2025-01-30 13:03:24.592524145 +0000 UTC m=+7.139620108"
Jan 30 13:03:25.207270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380238369.mount: Deactivated successfully.
Jan 30 13:03:25.541831 containerd[1476]: time="2025-01-30T13:03:25.541754447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:03:25.542359 containerd[1476]: time="2025-01-30T13:03:25.542316834Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Jan 30 13:03:25.543326 containerd[1476]: time="2025-01-30T13:03:25.543293294Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:03:25.545679 containerd[1476]: time="2025-01-30T13:03:25.545640342Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:03:25.547653 containerd[1476]: time="2025-01-30T13:03:25.546327214Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.694949027s"
Jan 30 13:03:25.547653 containerd[1476]: time="2025-01-30T13:03:25.546354678Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 30 13:03:25.551818 containerd[1476]: time="2025-01-30T13:03:25.551779220Z" level=info msg="CreateContainer within sandbox \"2805f4613e6d6dac705d3889abadd8570ad34c7246d918c1aafb23ec6a173cf2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 13:03:25.577513 containerd[1476]: time="2025-01-30T13:03:25.577459104Z" level=info msg="CreateContainer within sandbox \"2805f4613e6d6dac705d3889abadd8570ad34c7246d918c1aafb23ec6a173cf2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f80639227f8731b2d8236fed288654213feec063b0f50edab1f8cd9d8e02914\""
Jan 30 13:03:25.580679 containerd[1476]: time="2025-01-30T13:03:25.580594603Z" level=info msg="StartContainer for \"2f80639227f8731b2d8236fed288654213feec063b0f50edab1f8cd9d8e02914\""
Jan 30 13:03:25.610784 systemd[1]: Started cri-containerd-2f80639227f8731b2d8236fed288654213feec063b0f50edab1f8cd9d8e02914.scope - libcontainer container 2f80639227f8731b2d8236fed288654213feec063b0f50edab1f8cd9d8e02914.
Jan 30 13:03:25.635040 containerd[1476]: time="2025-01-30T13:03:25.634996966Z" level=info msg="StartContainer for \"2f80639227f8731b2d8236fed288654213feec063b0f50edab1f8cd9d8e02914\" returns successfully" Jan 30 13:03:26.626687 kubelet[2559]: I0130 13:03:26.626611 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-69tgw" podStartSLOduration=1.9253242830000001 podStartE2EDuration="3.626538542s" podCreationTimestamp="2025-01-30 13:03:23 +0000 UTC" firstStartedPulling="2025-01-30 13:03:23.849292835 +0000 UTC m=+6.396388798" lastFinishedPulling="2025-01-30 13:03:25.550507094 +0000 UTC m=+8.097603057" observedRunningTime="2025-01-30 13:03:26.626302594 +0000 UTC m=+9.173398557" watchObservedRunningTime="2025-01-30 13:03:26.626538542 +0000 UTC m=+9.173634505" Jan 30 13:03:29.079638 kubelet[2559]: E0130 13:03:29.079606 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:29.827042 systemd[1]: Created slice kubepods-besteffort-pode7d4eefd_0b3e_4749_a655_476f62eac828.slice - libcontainer container kubepods-besteffort-pode7d4eefd_0b3e_4749_a655_476f62eac828.slice. Jan 30 13:03:29.858346 systemd[1]: Created slice kubepods-besteffort-pod6d789978_772e_48df_8e1c_ef2d1e86c5ea.slice - libcontainer container kubepods-besteffort-pod6d789978_772e_48df_8e1c_ef2d1e86c5ea.slice. Jan 30 13:03:29.907402 kubelet[2559]: I0130 13:03:29.907351 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7d4eefd-0b3e-4749-a655-476f62eac828-typha-certs\") pod \"calico-typha-7b66b59579-8kcgv\" (UID: \"e7d4eefd-0b3e-4749-a655-476f62eac828\") " pod="calico-system/calico-typha-7b66b59579-8kcgv" Jan 30 13:03:29.907402 kubelet[2559]: I0130 13:03:29.907397 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-xtables-lock\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907402 kubelet[2559]: I0130 13:03:29.907414 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-flexvol-driver-host\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907626 kubelet[2559]: I0130 13:03:29.907433 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d789978-772e-48df-8e1c-ef2d1e86c5ea-tigera-ca-bundle\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907626 kubelet[2559]: I0130 13:03:29.907451 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-cni-net-dir\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907626 kubelet[2559]: I0130 13:03:29.907472 2559 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-cni-log-dir\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907626 kubelet[2559]: I0130 13:03:29.907493 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-var-lib-calico\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907626 kubelet[2559]: I0130 13:03:29.907509 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svn4z\" (UniqueName: \"kubernetes.io/projected/e7d4eefd-0b3e-4749-a655-476f62eac828-kube-api-access-svn4z\") pod \"calico-typha-7b66b59579-8kcgv\" (UID: \"e7d4eefd-0b3e-4749-a655-476f62eac828\") " pod="calico-system/calico-typha-7b66b59579-8kcgv" Jan 30 13:03:29.907740 kubelet[2559]: I0130 13:03:29.907527 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-lib-modules\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907740 kubelet[2559]: I0130 13:03:29.907545 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-cni-bin-dir\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907740 kubelet[2559]: I0130 13:03:29.907560 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d789978-772e-48df-8e1c-ef2d1e86c5ea-node-certs\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907740 kubelet[2559]: I0130 13:03:29.907577 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7d4eefd-0b3e-4749-a655-476f62eac828-tigera-ca-bundle\") pod \"calico-typha-7b66b59579-8kcgv\" (UID: \"e7d4eefd-0b3e-4749-a655-476f62eac828\") " pod="calico-system/calico-typha-7b66b59579-8kcgv" Jan 30 13:03:29.907740 kubelet[2559]: I0130 13:03:29.907617 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-var-run-calico\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907862 kubelet[2559]: I0130 13:03:29.907634 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d789978-772e-48df-8e1c-ef2d1e86c5ea-policysync\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.907862 kubelet[2559]: I0130 13:03:29.907650 2559 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6rd\" (UniqueName: \"kubernetes.io/projected/6d789978-772e-48df-8e1c-ef2d1e86c5ea-kube-api-access-zd6rd\") pod \"calico-node-8lvj5\" (UID: \"6d789978-772e-48df-8e1c-ef2d1e86c5ea\") " pod="calico-system/calico-node-8lvj5" Jan 30 13:03:29.958699 kubelet[2559]: E0130 13:03:29.958644 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:30.008550 kubelet[2559]: I0130 13:03:30.008507 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21d84f57-66ce-4eaa-a49a-963d6f74f4a0-kubelet-dir\") pod \"csi-node-driver-kw84f\" (UID: \"21d84f57-66ce-4eaa-a49a-963d6f74f4a0\") " pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:30.009166 kubelet[2559]: I0130 13:03:30.008672 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/21d84f57-66ce-4eaa-a49a-963d6f74f4a0-varrun\") pod \"csi-node-driver-kw84f\" (UID: \"21d84f57-66ce-4eaa-a49a-963d6f74f4a0\") " pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:30.009166 kubelet[2559]: I0130 13:03:30.008693 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21d84f57-66ce-4eaa-a49a-963d6f74f4a0-registration-dir\") pod \"csi-node-driver-kw84f\" (UID: \"21d84f57-66ce-4eaa-a49a-963d6f74f4a0\") " pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:30.009166 kubelet[2559]: I0130 13:03:30.008713 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtsh\" (UniqueName: \"kubernetes.io/projected/21d84f57-66ce-4eaa-a49a-963d6f74f4a0-kube-api-access-5jtsh\") pod \"csi-node-driver-kw84f\" (UID: \"21d84f57-66ce-4eaa-a49a-963d6f74f4a0\") " pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:30.009166 kubelet[2559]: I0130 13:03:30.008752 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21d84f57-66ce-4eaa-a49a-963d6f74f4a0-socket-dir\") pod \"csi-node-driver-kw84f\" (UID: \"21d84f57-66ce-4eaa-a49a-963d6f74f4a0\") " pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:30.010935 kubelet[2559]: E0130 13:03:30.010903 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.010935 kubelet[2559]: W0130 13:03:30.010930 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.011161 kubelet[2559]: E0130 13:03:30.010966 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.011279 kubelet[2559]: E0130 13:03:30.011175 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.011279 kubelet[2559]: W0130 13:03:30.011222 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.011279 kubelet[2559]: E0130 13:03:30.011235 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.012362 kubelet[2559]: E0130 13:03:30.011901 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.012362 kubelet[2559]: W0130 13:03:30.011918 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.012793 kubelet[2559]: E0130 13:03:30.012672 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.013418 kubelet[2559]: E0130 13:03:30.012964 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.013418 kubelet[2559]: W0130 13:03:30.012980 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.013418 kubelet[2559]: E0130 13:03:30.013005 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.037699 kubelet[2559]: E0130 13:03:30.032891 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.037699 kubelet[2559]: W0130 13:03:30.032916 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.037699 kubelet[2559]: E0130 13:03:30.032965 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.037699 kubelet[2559]: E0130 13:03:30.033726 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.037699 kubelet[2559]: W0130 13:03:30.033738 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.037699 kubelet[2559]: E0130 13:03:30.033749 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.039421 kubelet[2559]: E0130 13:03:30.039392 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.039421 kubelet[2559]: W0130 13:03:30.039415 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.039541 kubelet[2559]: E0130 13:03:30.039437 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.039762 kubelet[2559]: E0130 13:03:30.039734 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.039762 kubelet[2559]: W0130 13:03:30.039749 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.039762 kubelet[2559]: E0130 13:03:30.039760 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.109971 kubelet[2559]: E0130 13:03:30.109843 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.109971 kubelet[2559]: W0130 13:03:30.109869 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.109971 kubelet[2559]: E0130 13:03:30.109891 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.110739 kubelet[2559]: E0130 13:03:30.110710 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.110739 kubelet[2559]: W0130 13:03:30.110730 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.110826 kubelet[2559]: E0130 13:03:30.110753 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.111077 kubelet[2559]: E0130 13:03:30.111062 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.111077 kubelet[2559]: W0130 13:03:30.111076 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.111190 kubelet[2559]: E0130 13:03:30.111093 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.111346 kubelet[2559]: E0130 13:03:30.111331 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.111346 kubelet[2559]: W0130 13:03:30.111346 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.111501 kubelet[2559]: E0130 13:03:30.111386 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.111526 kubelet[2559]: E0130 13:03:30.111510 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.111526 kubelet[2559]: W0130 13:03:30.111518 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.111800 kubelet[2559]: E0130 13:03:30.111543 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.111887 kubelet[2559]: E0130 13:03:30.111687 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.111912 kubelet[2559]: W0130 13:03:30.111892 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.111948 kubelet[2559]: E0130 13:03:30.111911 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.112165 kubelet[2559]: E0130 13:03:30.112126 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.112196 kubelet[2559]: W0130 13:03:30.112167 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.112196 kubelet[2559]: E0130 13:03:30.112183 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.112364 kubelet[2559]: E0130 13:03:30.112350 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.112397 kubelet[2559]: W0130 13:03:30.112365 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.112397 kubelet[2559]: E0130 13:03:30.112381 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.112535 kubelet[2559]: E0130 13:03:30.112518 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.112535 kubelet[2559]: W0130 13:03:30.112534 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.112606 kubelet[2559]: E0130 13:03:30.112556 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.112725 kubelet[2559]: E0130 13:03:30.112713 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.112725 kubelet[2559]: W0130 13:03:30.112725 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.112785 kubelet[2559]: E0130 13:03:30.112746 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.112938 kubelet[2559]: E0130 13:03:30.112926 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.112978 kubelet[2559]: W0130 13:03:30.112937 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.113081 kubelet[2559]: E0130 13:03:30.112974 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.113531 kubelet[2559]: E0130 13:03:30.113509 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.113531 kubelet[2559]: W0130 13:03:30.113521 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.113928 kubelet[2559]: E0130 13:03:30.113561 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.113928 kubelet[2559]: E0130 13:03:30.113721 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.113928 kubelet[2559]: W0130 13:03:30.113730 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.113928 kubelet[2559]: E0130 13:03:30.113754 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.114491 kubelet[2559]: E0130 13:03:30.114468 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.114491 kubelet[2559]: W0130 13:03:30.114484 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.114579 kubelet[2559]: E0130 13:03:30.114513 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.114699 kubelet[2559]: E0130 13:03:30.114668 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.114699 kubelet[2559]: W0130 13:03:30.114683 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.114789 kubelet[2559]: E0130 13:03:30.114706 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.114850 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.115783 kubelet[2559]: W0130 13:03:30.114862 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.114884 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.115041 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.115783 kubelet[2559]: W0130 13:03:30.115069 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.115094 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.115313 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.115783 kubelet[2559]: W0130 13:03:30.115322 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.115427 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.115783 kubelet[2559]: E0130 13:03:30.115569 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.116069 kubelet[2559]: W0130 13:03:30.115578 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.116069 kubelet[2559]: E0130 13:03:30.115603 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.116069 kubelet[2559]: E0130 13:03:30.115791 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.116069 kubelet[2559]: W0130 13:03:30.115802 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.116069 kubelet[2559]: E0130 13:03:30.115817 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.116069 kubelet[2559]: E0130 13:03:30.116007 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.116069 kubelet[2559]: W0130 13:03:30.116016 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.116069 kubelet[2559]: E0130 13:03:30.116043 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.116257 kubelet[2559]: E0130 13:03:30.116191 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.116257 kubelet[2559]: W0130 13:03:30.116199 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.116394 kubelet[2559]: E0130 13:03:30.116383 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.116394 kubelet[2559]: W0130 13:03:30.116394 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.116540 kubelet[2559]: E0130 13:03:30.116408 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.116674 kubelet[2559]: E0130 13:03:30.116649 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:03:30.116714 kubelet[2559]: E0130 13:03:30.116662 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.116714 kubelet[2559]: W0130 13:03:30.116710 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.116772 kubelet[2559]: E0130 13:03:30.116721 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.117190 kubelet[2559]: E0130 13:03:30.117140 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.117190 kubelet[2559]: W0130 13:03:30.117157 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.117190 kubelet[2559]: E0130 13:03:30.117171 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.129920 kubelet[2559]: E0130 13:03:30.129888 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:03:30.129920 kubelet[2559]: W0130 13:03:30.129913 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:03:30.130074 kubelet[2559]: E0130 13:03:30.129942 2559 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:03:30.142284 kubelet[2559]: E0130 13:03:30.142244 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:30.143067 containerd[1476]: time="2025-01-30T13:03:30.143017990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b66b59579-8kcgv,Uid:e7d4eefd-0b3e-4749-a655-476f62eac828,Namespace:calico-system,Attempt:0,}" Jan 30 13:03:30.162069 kubelet[2559]: E0130 13:03:30.162026 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:30.162576 containerd[1476]: time="2025-01-30T13:03:30.162542242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lvj5,Uid:6d789978-772e-48df-8e1c-ef2d1e86c5ea,Namespace:calico-system,Attempt:0,}" Jan 30 13:03:30.244352 containerd[1476]: time="2025-01-30T13:03:30.243996090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:30.244352 containerd[1476]: time="2025-01-30T13:03:30.244088610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:30.244352 containerd[1476]: time="2025-01-30T13:03:30.244138109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:30.244352 containerd[1476]: time="2025-01-30T13:03:30.244229030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:30.252188 containerd[1476]: time="2025-01-30T13:03:30.252044112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:30.252188 containerd[1476]: time="2025-01-30T13:03:30.252115242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:30.252188 containerd[1476]: time="2025-01-30T13:03:30.252127636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:30.252400 containerd[1476]: time="2025-01-30T13:03:30.252260619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:30.268042 systemd[1]: Started cri-containerd-91174c28f580796b2fa138309e196339bb324b82535e24796cc08149111034c7.scope - libcontainer container 91174c28f580796b2fa138309e196339bb324b82535e24796cc08149111034c7. Jan 30 13:03:30.271362 systemd[1]: Started cri-containerd-b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b.scope - libcontainer container b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b. Jan 30 13:03:30.296958 containerd[1476]: time="2025-01-30T13:03:30.296794687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lvj5,Uid:6d789978-772e-48df-8e1c-ef2d1e86c5ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\"" Jan 30 13:03:30.297968 kubelet[2559]: E0130 13:03:30.297709 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:30.299405 containerd[1476]: time="2025-01-30T13:03:30.299372780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:03:30.306621 containerd[1476]: time="2025-01-30T13:03:30.306488003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b66b59579-8kcgv,Uid:e7d4eefd-0b3e-4749-a655-476f62eac828,Namespace:calico-system,Attempt:0,} returns sandbox id \"91174c28f580796b2fa138309e196339bb324b82535e24796cc08149111034c7\"" Jan 30 13:03:30.307438 kubelet[2559]: E0130 13:03:30.307379 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:31.312119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607747642.mount: Deactivated successfully. 
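
The burst of driver-call.go and plugins.go errors above is kubelet's FlexVolume prober scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and finding the nodeagent~uds directory before its uds binary exists; that binary is installed by the flexvol-driver container from the ghcr.io/flatcar/calico/pod2daemon-flexvol image whose pull begins here, and no further probe failures appear once that container has run. kubelet invokes each driver as "<driver> init" and unmarshals its stdout as JSON, so a missing executable yields empty output and the "unexpected end of JSON input" errors. A minimal sketch of a conforming driver, illustrative only and not Calico's actual uds driver:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus is the JSON envelope the FlexVolume API expects a driver
    // to print on stdout for every call, including "init".
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus) {
    	out, _ := json.Marshal(s)
    	fmt.Println(string(out))
    }

    func main() {
    	if len(os.Args) < 2 {
    		reply(driverStatus{Status: "Failure", Message: "no command"})
    		os.Exit(1)
    	}
    	switch os.Args[1] {
    	case "init":
    		// kubelet unmarshals stdout as JSON; a missing binary produces no
    		// output at all, hence "unexpected end of JSON input" in the log.
    		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    	default:
    		reply(driverStatus{Status: "Not supported"})
    	}
    }

Dropping a stub like this into the exec directory would quiet the prober, but in this log the errors appear to resolve themselves as soon as flexvol-driver finishes.
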
Jan 30 13:03:31.382525 containerd[1476]: time="2025-01-30T13:03:31.382446519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:31.383326 containerd[1476]: time="2025-01-30T13:03:31.383273386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 30 13:03:31.384543 containerd[1476]: time="2025-01-30T13:03:31.384469544Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:31.388032 containerd[1476]: time="2025-01-30T13:03:31.386847707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:31.388032 containerd[1476]: time="2025-01-30T13:03:31.387676493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.088269008s" Jan 30 13:03:31.388032 containerd[1476]: time="2025-01-30T13:03:31.387704282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 13:03:31.388803 containerd[1476]: time="2025-01-30T13:03:31.388778209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:03:31.390699 containerd[1476]: time="2025-01-30T13:03:31.390659172Z" level=info msg="CreateContainer within sandbox \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:03:31.403079 containerd[1476]: time="2025-01-30T13:03:31.403007878Z" level=info msg="CreateContainer within sandbox \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4\"" Jan 30 13:03:31.403556 containerd[1476]: time="2025-01-30T13:03:31.403532307Z" level=info msg="StartContainer for \"f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4\"" Jan 30 13:03:31.440826 systemd[1]: Started cri-containerd-f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4.scope - libcontainer container f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4. Jan 30 13:03:31.489037 containerd[1476]: time="2025-01-30T13:03:31.488915640Z" level=info msg="StartContainer for \"f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4\" returns successfully" Jan 30 13:03:31.514095 systemd[1]: cri-containerd-f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4.scope: Deactivated successfully. 
Jan 30 13:03:31.546207 kubelet[2559]: E0130 13:03:31.545562 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:31.563623 containerd[1476]: time="2025-01-30T13:03:31.563189606Z" level=info msg="shim disconnected" id=f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4 namespace=k8s.io Jan 30 13:03:31.563623 containerd[1476]: time="2025-01-30T13:03:31.563247863Z" level=warning msg="cleaning up after shim disconnected" id=f57155ed84e06cac90ebb507ac6a661db7d6c5db45c23e66a08511f2a30855c4 namespace=k8s.io Jan 30 13:03:31.563623 containerd[1476]: time="2025-01-30T13:03:31.563257579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:03:31.625646 kubelet[2559]: E0130 13:03:31.625515 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:32.051932 update_engine[1460]: I20250130 13:03:32.051856 1460 update_attempter.cc:509] Updating boot flags... Jan 30 13:03:32.076633 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3159) Jan 30 13:03:32.116216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3158) Jan 30 13:03:33.516553 containerd[1476]: time="2025-01-30T13:03:33.516491315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:33.517132 containerd[1476]: time="2025-01-30T13:03:33.517094421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 30 13:03:33.517894 containerd[1476]: time="2025-01-30T13:03:33.517862109Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:33.520794 containerd[1476]: time="2025-01-30T13:03:33.520747248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:33.521497 containerd[1476]: time="2025-01-30T13:03:33.521457637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.13264668s" Jan 30 13:03:33.521497 containerd[1476]: time="2025-01-30T13:03:33.521490305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 13:03:33.523058 containerd[1476]: time="2025-01-30T13:03:33.523027721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:03:33.529811 containerd[1476]: time="2025-01-30T13:03:33.529761058Z" level=info msg="CreateContainer within sandbox \"91174c28f580796b2fa138309e196339bb324b82535e24796cc08149111034c7\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:03:33.541499 containerd[1476]: time="2025-01-30T13:03:33.541441043Z" level=info msg="CreateContainer within sandbox \"91174c28f580796b2fa138309e196339bb324b82535e24796cc08149111034c7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b64b5eb886df8b8a8db7839542cc0a7a08ab089a634fdfcb727b40d4692ff9d7\"" Jan 30 13:03:33.542127 containerd[1476]: time="2025-01-30T13:03:33.541972495Z" level=info msg="StartContainer for \"b64b5eb886df8b8a8db7839542cc0a7a08ab089a634fdfcb727b40d4692ff9d7\"" Jan 30 13:03:33.547316 kubelet[2559]: E0130 13:03:33.547272 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:33.580826 systemd[1]: Started cri-containerd-b64b5eb886df8b8a8db7839542cc0a7a08ab089a634fdfcb727b40d4692ff9d7.scope - libcontainer container b64b5eb886df8b8a8db7839542cc0a7a08ab089a634fdfcb727b40d4692ff9d7. Jan 30 13:03:33.613155 containerd[1476]: time="2025-01-30T13:03:33.613105158Z" level=info msg="StartContainer for \"b64b5eb886df8b8a8db7839542cc0a7a08ab089a634fdfcb727b40d4692ff9d7\" returns successfully" Jan 30 13:03:33.631269 kubelet[2559]: E0130 13:03:33.631223 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:34.632567 kubelet[2559]: I0130 13:03:34.632534 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:34.632992 kubelet[2559]: E0130 13:03:34.632894 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:35.545473 kubelet[2559]: E0130 13:03:35.545114 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:37.545652 kubelet[2559]: E0130 13:03:37.545575 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:37.989841 containerd[1476]: time="2025-01-30T13:03:37.989713760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:37.991528 containerd[1476]: time="2025-01-30T13:03:37.991483476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 13:03:37.993085 containerd[1476]: time="2025-01-30T13:03:37.993047249Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:37.997655 containerd[1476]: time="2025-01-30T13:03:37.997537861Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:37.999171 containerd[1476]: time="2025-01-30T13:03:37.999096035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.476003573s" Jan 30 13:03:37.999388 containerd[1476]: time="2025-01-30T13:03:37.999149140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 13:03:38.002325 containerd[1476]: time="2025-01-30T13:03:38.001909048Z" level=info msg="CreateContainer within sandbox \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:03:38.037119 containerd[1476]: time="2025-01-30T13:03:38.037054400Z" level=info msg="CreateContainer within sandbox \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5\"" Jan 30 13:03:38.044678 containerd[1476]: time="2025-01-30T13:03:38.044622340Z" level=info msg="StartContainer for \"d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5\"" Jan 30 13:03:38.077859 systemd[1]: run-containerd-runc-k8s.io-d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5-runc.zDlxy1.mount: Deactivated successfully. Jan 30 13:03:38.087252 systemd[1]: Started cri-containerd-d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5.scope - libcontainer container d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5. Jan 30 13:03:38.124464 containerd[1476]: time="2025-01-30T13:03:38.124417648Z" level=info msg="StartContainer for \"d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5\" returns successfully" Jan 30 13:03:38.656068 kubelet[2559]: E0130 13:03:38.655994 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:38.677626 kubelet[2559]: I0130 13:03:38.676435 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b66b59579-8kcgv" podStartSLOduration=6.462522837 podStartE2EDuration="9.676417246s" podCreationTimestamp="2025-01-30 13:03:29 +0000 UTC" firstStartedPulling="2025-01-30 13:03:30.308534084 +0000 UTC m=+12.855630047" lastFinishedPulling="2025-01-30 13:03:33.522428493 +0000 UTC m=+16.069524456" observedRunningTime="2025-01-30 13:03:33.64618301 +0000 UTC m=+16.193278973" watchObservedRunningTime="2025-01-30 13:03:38.676417246 +0000 UTC m=+21.223513209" Jan 30 13:03:38.714099 systemd[1]: cri-containerd-d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5.scope: Deactivated successfully. 
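
The scope deactivation above marks the install-cni container exiting after copying the Calico CNI binaries and config into place; the shim-disconnected warnings that follow are the normal teardown of its task. The node then reports ready ("Fast updating node status as it just became ready"), but the RunPodSandbox attempts further below still fail: the Calico CNI plugin checks /var/lib/calico/nodename, which calico-node writes only once its main container is up. A hedged sketch of the kind of check behind that message, not Calico's actual source:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // nodenameFile is written by calico-node once it has started and decided
    // which Calico node object this host corresponds to; the CNI plugin
    // refuses to set up pod networking until the file exists.
    const nodenameFile = "/var/lib/calico/nodename"

    func calicoNodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		// Same shape as the log's complaint: the file is absent until the
    		// calico/node container is running and has mounted /var/lib/calico/.
    		return "", fmt.Errorf("reading %s: %w: check that the calico/node container is running", nodenameFile, err)
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := calicoNodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("calico node:", name)
    }
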
Jan 30 13:03:38.753787 kubelet[2559]: I0130 13:03:38.753597 2559 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:03:38.867456 kubelet[2559]: W0130 13:03:38.866893 2559 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jan 30 13:03:38.867456 kubelet[2559]: E0130 13:03:38.866969 2559 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 30 13:03:38.872212 containerd[1476]: time="2025-01-30T13:03:38.872021752Z" level=info msg="shim disconnected" id=d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5 namespace=k8s.io Jan 30 13:03:38.872829 containerd[1476]: time="2025-01-30T13:03:38.872789595Z" level=warning msg="cleaning up after shim disconnected" id=d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5 namespace=k8s.io Jan 30 13:03:38.873415 containerd[1476]: time="2025-01-30T13:03:38.873390801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:03:38.875928 systemd[1]: Created slice kubepods-burstable-pod51a31eb8_8977_41a3_b690_162f3ef160ae.slice - libcontainer container kubepods-burstable-pod51a31eb8_8977_41a3_b690_162f3ef160ae.slice. Jan 30 13:03:38.880862 kubelet[2559]: I0130 13:03:38.879959 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znb6q\" (UniqueName: \"kubernetes.io/projected/c16bbc77-2b47-4ff4-846f-0b437cb6c4ee-kube-api-access-znb6q\") pod \"coredns-668d6bf9bc-w7k54\" (UID: \"c16bbc77-2b47-4ff4-846f-0b437cb6c4ee\") " pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:38.880862 kubelet[2559]: I0130 13:03:38.880014 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2tq\" (UniqueName: \"kubernetes.io/projected/e0b441b3-88a9-4555-8974-b147721621a3-kube-api-access-zt2tq\") pod \"calico-apiserver-7874549f5f-gzb44\" (UID: \"e0b441b3-88a9-4555-8974-b147721621a3\") " pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:38.880862 kubelet[2559]: I0130 13:03:38.880039 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6s2s\" (UniqueName: \"kubernetes.io/projected/7acfabdf-bbd3-498c-9434-a86e65427513-kube-api-access-t6s2s\") pod \"calico-apiserver-7874549f5f-h26th\" (UID: \"7acfabdf-bbd3-498c-9434-a86e65427513\") " pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:38.880862 kubelet[2559]: I0130 13:03:38.880060 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51a31eb8-8977-41a3-b690-162f3ef160ae-config-volume\") pod \"coredns-668d6bf9bc-89crv\" (UID: \"51a31eb8-8977-41a3-b690-162f3ef160ae\") " pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:38.880862 kubelet[2559]: I0130 13:03:38.880078 2559 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kbr\" (UniqueName: \"kubernetes.io/projected/51a31eb8-8977-41a3-b690-162f3ef160ae-kube-api-access-57kbr\") pod \"coredns-668d6bf9bc-89crv\" (UID: \"51a31eb8-8977-41a3-b690-162f3ef160ae\") " pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:38.881131 kubelet[2559]: I0130 13:03:38.880096 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c16bbc77-2b47-4ff4-846f-0b437cb6c4ee-config-volume\") pod \"coredns-668d6bf9bc-w7k54\" (UID: \"c16bbc77-2b47-4ff4-846f-0b437cb6c4ee\") " pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:38.881131 kubelet[2559]: I0130 13:03:38.880116 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0b441b3-88a9-4555-8974-b147721621a3-calico-apiserver-certs\") pod \"calico-apiserver-7874549f5f-gzb44\" (UID: \"e0b441b3-88a9-4555-8974-b147721621a3\") " pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:38.881131 kubelet[2559]: I0130 13:03:38.880141 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7acfabdf-bbd3-498c-9434-a86e65427513-calico-apiserver-certs\") pod \"calico-apiserver-7874549f5f-h26th\" (UID: \"7acfabdf-bbd3-498c-9434-a86e65427513\") " pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:38.885346 systemd[1]: Created slice kubepods-burstable-podc16bbc77_2b47_4ff4_846f_0b437cb6c4ee.slice - libcontainer container kubepods-burstable-podc16bbc77_2b47_4ff4_846f_0b437cb6c4ee.slice. Jan 30 13:03:38.898188 systemd[1]: Created slice kubepods-besteffort-pod7acfabdf_bbd3_498c_9434_a86e65427513.slice - libcontainer container kubepods-besteffort-pod7acfabdf_bbd3_498c_9434_a86e65427513.slice. Jan 30 13:03:38.909032 systemd[1]: Created slice kubepods-besteffort-pode0b441b3_88a9_4555_8974_b147721621a3.slice - libcontainer container kubepods-besteffort-pode0b441b3_88a9_4555_8974_b147721621a3.slice. Jan 30 13:03:38.916035 systemd[1]: Created slice kubepods-besteffort-pod52142e13_5b13_4ee1_bef6_e84504589fe4.slice - libcontainer container kubepods-besteffort-pod52142e13_5b13_4ee1_bef6_e84504589fe4.slice. Jan 30 13:03:38.981113 kubelet[2559]: I0130 13:03:38.981060 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52142e13-5b13-4ee1-bef6-e84504589fe4-tigera-ca-bundle\") pod \"calico-kube-controllers-5f499d887f-64bsx\" (UID: \"52142e13-5b13-4ee1-bef6-e84504589fe4\") " pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:38.984610 kubelet[2559]: I0130 13:03:38.981855 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tpcj\" (UniqueName: \"kubernetes.io/projected/52142e13-5b13-4ee1-bef6-e84504589fe4-kube-api-access-4tpcj\") pod \"calico-kube-controllers-5f499d887f-64bsx\" (UID: \"52142e13-5b13-4ee1-bef6-e84504589fe4\") " pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:39.032773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d406bf2918c1cf0c891789829534e8f5af89416042ad04990ee14901a8af7ab5-rootfs.mount: Deactivated successfully. 
Jan 30 13:03:39.183279 kubelet[2559]: E0130 13:03:39.183156 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:39.184819 containerd[1476]: time="2025-01-30T13:03:39.184769863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:39.191511 kubelet[2559]: E0130 13:03:39.191467 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:39.192067 containerd[1476]: time="2025-01-30T13:03:39.192033118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:39.219766 containerd[1476]: time="2025-01-30T13:03:39.219216586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:0,}" Jan 30 13:03:39.627943 systemd[1]: Created slice kubepods-besteffort-pod21d84f57_66ce_4eaa_a49a_963d6f74f4a0.slice - libcontainer container kubepods-besteffort-pod21d84f57_66ce_4eaa_a49a_963d6f74f4a0.slice. Jan 30 13:03:39.631346 containerd[1476]: time="2025-01-30T13:03:39.631256539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:0,}" Jan 30 13:03:39.672226 kubelet[2559]: E0130 13:03:39.671822 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:39.689131 containerd[1476]: time="2025-01-30T13:03:39.681351662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:03:39.957831 containerd[1476]: time="2025-01-30T13:03:39.957652671Z" level=error msg="Failed to destroy network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.964875 containerd[1476]: time="2025-01-30T13:03:39.964666506Z" level=error msg="encountered an error cleaning up failed sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.964875 containerd[1476]: time="2025-01-30T13:03:39.964769561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.965053 kubelet[2559]: E0130 13:03:39.964994 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.972034 containerd[1476]: time="2025-01-30T13:03:39.971970871Z" level=error msg="Failed to destroy network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.972840 kubelet[2559]: E0130 13:03:39.972746 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:39.973370 kubelet[2559]: E0130 13:03:39.972819 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:39.973370 kubelet[2559]: E0130 13:03:39.973069 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-89crv" podUID="51a31eb8-8977-41a3-b690-162f3ef160ae" Jan 30 13:03:39.973459 containerd[1476]: time="2025-01-30T13:03:39.972914884Z" level=error msg="Failed to destroy network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.973524 containerd[1476]: time="2025-01-30T13:03:39.973475829Z" level=error msg="encountered an error cleaning up failed sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.973613 containerd[1476]: time="2025-01-30T13:03:39.973527617Z" level=error msg="encountered an error cleaning up failed sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.973754 containerd[1476]: time="2025-01-30T13:03:39.973582284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.973754 containerd[1476]: time="2025-01-30T13:03:39.973543693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.973958 kubelet[2559]: E0130 13:03:39.973865 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.973958 kubelet[2559]: E0130 13:03:39.973865 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.974052 kubelet[2559]: E0130 13:03:39.973980 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:39.974052 kubelet[2559]: E0130 13:03:39.973997 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:39.974052 kubelet[2559]: E0130 13:03:39.974027 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" podUID="52142e13-5b13-4ee1-bef6-e84504589fe4" Jan 30 13:03:39.974170 kubelet[2559]: E0130 13:03:39.973944 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:39.974170 kubelet[2559]: E0130 13:03:39.974063 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:39.974170 kubelet[2559]: E0130 13:03:39.974084 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w7k54" podUID="c16bbc77-2b47-4ff4-846f-0b437cb6c4ee" Jan 30 13:03:39.980015 containerd[1476]: time="2025-01-30T13:03:39.979149426Z" level=error msg="Failed to destroy network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.980015 containerd[1476]: time="2025-01-30T13:03:39.979468989Z" level=error msg="encountered an error cleaning up failed sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.980015 containerd[1476]: time="2025-01-30T13:03:39.979523776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.980325 kubelet[2559]: E0130 13:03:39.979772 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:39.980325 kubelet[2559]: E0130 13:03:39.979831 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:39.980325 kubelet[2559]: E0130 13:03:39.979850 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:39.980483 kubelet[2559]: E0130 13:03:39.979977 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:39.990018 kubelet[2559]: E0130 13:03:39.989969 2559 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:03:39.990018 kubelet[2559]: E0130 13:03:39.990014 2559 projected.go:194] Error preparing data for projected volume kube-api-access-zt2tq for pod calico-apiserver/calico-apiserver-7874549f5f-gzb44: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:03:39.990199 kubelet[2559]: E0130 13:03:39.990082 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0b441b3-88a9-4555-8974-b147721621a3-kube-api-access-zt2tq podName:e0b441b3-88a9-4555-8974-b147721621a3 nodeName:}" failed. No retries permitted until 2025-01-30 13:03:40.490056045 +0000 UTC m=+23.037152008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zt2tq" (UniqueName: "kubernetes.io/projected/e0b441b3-88a9-4555-8974-b147721621a3-kube-api-access-zt2tq") pod "calico-apiserver-7874549f5f-gzb44" (UID: "e0b441b3-88a9-4555-8974-b147721621a3") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:03:39.991187 kubelet[2559]: E0130 13:03:39.991068 2559 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:03:39.991187 kubelet[2559]: E0130 13:03:39.991105 2559 projected.go:194] Error preparing data for projected volume kube-api-access-t6s2s for pod calico-apiserver/calico-apiserver-7874549f5f-h26th: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:03:39.991187 kubelet[2559]: E0130 13:03:39.991162 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7acfabdf-bbd3-498c-9434-a86e65427513-kube-api-access-t6s2s podName:7acfabdf-bbd3-498c-9434-a86e65427513 nodeName:}" failed. No retries permitted until 2025-01-30 13:03:40.491147663 +0000 UTC m=+23.038243626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t6s2s" (UniqueName: "kubernetes.io/projected/7acfabdf-bbd3-498c-9434-a86e65427513-kube-api-access-t6s2s") pod "calico-apiserver-7874549f5f-h26th" (UID: "7acfabdf-bbd3-498c-9434-a86e65427513") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:03:40.029988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06-shm.mount: Deactivated successfully. Jan 30 13:03:40.030191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa-shm.mount: Deactivated successfully. Jan 30 13:03:40.674422 kubelet[2559]: I0130 13:03:40.674376 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227" Jan 30 13:03:40.675199 containerd[1476]: time="2025-01-30T13:03:40.675166102Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:03:40.675429 containerd[1476]: time="2025-01-30T13:03:40.675361458Z" level=info msg="Ensure that sandbox 71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227 in task-service has been cleanup successfully" Jan 30 13:03:40.675456 kubelet[2559]: I0130 13:03:40.675216 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa" Jan 30 13:03:40.677845 containerd[1476]: time="2025-01-30T13:03:40.675773965Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:03:40.677845 containerd[1476]: time="2025-01-30T13:03:40.677620789Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:03:40.677845 containerd[1476]: time="2025-01-30T13:03:40.677795829Z" level=info msg="Ensure that sandbox a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06 in task-service has been cleanup successfully" Jan 30 13:03:40.677390 systemd[1]: run-netns-cni\x2d16161922\x2d3221\x2d8998\x2d96d2\x2da00e4863a80b.mount: Deactivated successfully. 
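
The two MountVolume.SetUp failures above are downstream of the forbidden ConfigMap list/watch at 13:03:38: kube-root-ca.crt never lands in the kubelet's cache, so projecting the kube-api-access-* service account volumes times out. kubelet parks each failed operation and retries on an exponential schedule; the log shows only the first step (durationBeforeRetry 500ms). A sketch of that schedule, assuming kubelet's usual doubling with a cap around 2m2s (the factor and cap are assumptions taken from kubelet defaults, not visible in this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond                // first wait, as logged
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap from kubelet defaults
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d failed: next retry permitted in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```
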
Jan 30 13:03:40.678206 kubelet[2559]: I0130 13:03:40.676292 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698" Jan 30 13:03:40.678206 kubelet[2559]: I0130 13:03:40.677144 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.678372379Z" level=info msg="Ensure that sandbox 144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa in task-service has been cleanup successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.678415969Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.678438684Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.678536062Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.678555018Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.678994359Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.679137127Z" level=info msg="Ensure that sandbox 2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698 in task-service has been cleanup successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.679293812Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.679306929Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.679536797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:1,}" Jan 30 13:03:40.680748 containerd[1476]: time="2025-01-30T13:03:40.679740271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:1,}" Jan 30 13:03:40.681924 kubelet[2559]: E0130 13:03:40.678984 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:40.681970 containerd[1476]: time="2025-01-30T13:03:40.681495756Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:03:40.681970 containerd[1476]: time="2025-01-30T13:03:40.681523669Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:03:40.682272 systemd[1]: 
run-netns-cni\x2dcbb5d54a\x2dc72f\x2dd68c\x2d79d4\x2df678da3c2000.mount: Deactivated successfully. Jan 30 13:03:40.682732 kubelet[2559]: E0130 13:03:40.682342 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:40.682405 systemd[1]: run-netns-cni\x2db4bd13e3\x2dac92\x2dada9\x2d882d\x2d64d3214df27b.mount: Deactivated successfully. Jan 30 13:03:40.683970 containerd[1476]: time="2025-01-30T13:03:40.683926168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:1,}" Jan 30 13:03:40.684592 containerd[1476]: time="2025-01-30T13:03:40.684552187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:1,}" Jan 30 13:03:40.701666 containerd[1476]: time="2025-01-30T13:03:40.701620022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:03:40.714497 containerd[1476]: time="2025-01-30T13:03:40.714445213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:03:40.833719 containerd[1476]: time="2025-01-30T13:03:40.833669157Z" level=error msg="Failed to destroy network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.834264 containerd[1476]: time="2025-01-30T13:03:40.834123974Z" level=error msg="encountered an error cleaning up failed sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.834264 containerd[1476]: time="2025-01-30T13:03:40.834190959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.836578 kubelet[2559]: E0130 13:03:40.836525 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.836689 kubelet[2559]: E0130 13:03:40.836617 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:40.836689 kubelet[2559]: E0130 13:03:40.836643 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:40.836748 kubelet[2559]: E0130 13:03:40.836681 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" podUID="52142e13-5b13-4ee1-bef6-e84504589fe4" Jan 30 13:03:40.852611 containerd[1476]: time="2025-01-30T13:03:40.852437529Z" level=error msg="Failed to destroy network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.855936 containerd[1476]: time="2025-01-30T13:03:40.855872155Z" level=error msg="encountered an error cleaning up failed sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.856049 containerd[1476]: time="2025-01-30T13:03:40.855957496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.856228 kubelet[2559]: E0130 13:03:40.856189 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.856292 kubelet[2559]: E0130 
13:03:40.856248 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:40.856292 kubelet[2559]: E0130 13:03:40.856269 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:40.856355 kubelet[2559]: E0130 13:03:40.856304 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" podUID="7acfabdf-bbd3-498c-9434-a86e65427513" Jan 30 13:03:40.860147 containerd[1476]: time="2025-01-30T13:03:40.859564443Z" level=error msg="Failed to destroy network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.860959 containerd[1476]: time="2025-01-30T13:03:40.860900822Z" level=error msg="encountered an error cleaning up failed sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.861156 containerd[1476]: time="2025-01-30T13:03:40.861065865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.861452 kubelet[2559]: E0130 13:03:40.861411 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.861774 kubelet[2559]: E0130 13:03:40.861645 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:40.861774 kubelet[2559]: E0130 13:03:40.861688 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:40.861962 kubelet[2559]: E0130 13:03:40.861738 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w7k54" podUID="c16bbc77-2b47-4ff4-846f-0b437cb6c4ee" Jan 30 13:03:40.882471 containerd[1476]: time="2025-01-30T13:03:40.882412856Z" level=error msg="Failed to destroy network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.883343 containerd[1476]: time="2025-01-30T13:03:40.883292298Z" level=error msg="encountered an error cleaning up failed sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.884361 containerd[1476]: time="2025-01-30T13:03:40.884229207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.884896 kubelet[2559]: E0130 13:03:40.884644 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.884896 kubelet[2559]: E0130 13:03:40.884777 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:40.884896 kubelet[2559]: E0130 13:03:40.884798 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:40.885128 kubelet[2559]: E0130 13:03:40.884870 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" podUID="e0b441b3-88a9-4555-8974-b147721621a3" Jan 30 13:03:40.924651 containerd[1476]: time="2025-01-30T13:03:40.924423953Z" level=error msg="Failed to destroy network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.925488 containerd[1476]: time="2025-01-30T13:03:40.925456920Z" level=error msg="encountered an error cleaning up failed sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.925686 containerd[1476]: time="2025-01-30T13:03:40.925661994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.926120 kubelet[2559]: E0130 13:03:40.926062 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.926263 kubelet[2559]: E0130 13:03:40.926143 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:40.926296 kubelet[2559]: E0130 13:03:40.926271 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:40.926348 kubelet[2559]: E0130 13:03:40.926314 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:40.936864 containerd[1476]: time="2025-01-30T13:03:40.936819681Z" level=error msg="Failed to destroy network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.937383 containerd[1476]: time="2025-01-30T13:03:40.937339244Z" level=error msg="encountered an error cleaning up failed sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.937434 containerd[1476]: time="2025-01-30T13:03:40.937413027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.938096 kubelet[2559]: E0130 13:03:40.937639 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:40.938096 kubelet[2559]: E0130 13:03:40.937703 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:40.938096 kubelet[2559]: E0130 13:03:40.937724 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:40.938246 kubelet[2559]: E0130 13:03:40.937772 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-89crv" podUID="51a31eb8-8977-41a3-b690-162f3ef160ae" Jan 30 13:03:41.048460 systemd[1]: run-netns-cni\x2dc2e6cb56\x2dcfc9\x2dac7f\x2d7459\x2d0625c7389ef7.mount: Deactivated successfully. 
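
Every RunPodSandbox failure in this window, across all six pods and both attempt numbers, has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it starts, and that container's image was still being pulled at 13:03:39. Until it runs, every CNI add and delete fails, kubelet tears the sandbox down, and the Attempt counter increments. An illustrative recreation of the gate described by the error text (not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the error string returned for every ADD/DEL in this log.
		fmt.Printf("plugin type=\"calico\" failed: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	fmt.Println("nodename present; CNI ADD/DEL can proceed")
}
```
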
Jan 30 13:03:41.680286 kubelet[2559]: I0130 13:03:41.680251 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5" Jan 30 13:03:41.681323 containerd[1476]: time="2025-01-30T13:03:41.680942024Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.681538298Z" level=info msg="Ensure that sandbox 0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5 in task-service has been cleanup successfully" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.682128253Z" level=info msg="TearDown network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" successfully" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.682604833Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" returns successfully" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.684026852Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.684098517Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.684107795Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:03:41.684186 containerd[1476]: time="2025-01-30T13:03:41.684179100Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:03:41.683768 systemd[1]: run-netns-cni\x2d128e2a6e\x2d3f24\x2dc817\x2d7c47\x2dd994fd13f43b.mount: Deactivated successfully. 
Jan 30 13:03:41.684618 kubelet[2559]: I0130 13:03:41.681931 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754" Jan 30 13:03:41.684659 containerd[1476]: time="2025-01-30T13:03:41.684343225Z" level=info msg="Ensure that sandbox 35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754 in task-service has been cleanup successfully" Jan 30 13:03:41.685616 containerd[1476]: time="2025-01-30T13:03:41.685105304Z" level=info msg="TearDown network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" successfully" Jan 30 13:03:41.685616 containerd[1476]: time="2025-01-30T13:03:41.685126580Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" returns successfully" Jan 30 13:03:41.685735 kubelet[2559]: E0130 13:03:41.685188 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:41.686616 containerd[1476]: time="2025-01-30T13:03:41.686291094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:2,}" Jan 30 13:03:41.686894 kubelet[2559]: I0130 13:03:41.686868 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426" Jan 30 13:03:41.687610 containerd[1476]: time="2025-01-30T13:03:41.687571424Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:03:41.688113 containerd[1476]: time="2025-01-30T13:03:41.686324047Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:03:41.688113 containerd[1476]: time="2025-01-30T13:03:41.688010011Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:03:41.688113 containerd[1476]: time="2025-01-30T13:03:41.688020529Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:03:41.688335 systemd[1]: run-netns-cni\x2de4ac99fb\x2d1615\x2d1e3c\x2ddbe8\x2d9c842a34a12e.mount: Deactivated successfully. 
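
The recurring dns.go:153 warnings are independent of the CNI failures: the host resolv.conf carries more nameservers than the three a libc resolver will use, so kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A sketch of that truncation; the fourth address below is invented for illustration, since the log only shows the entries that survived:

```go
package main

import "fmt"

func main() {
	const maxNameservers = 3 // libc resolver limit (MAXNS)
	// The first three match the applied line in the log; 192.0.2.53 is
	// a hypothetical stand-in for whatever entry was dropped.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	if len(nameservers) > maxNameservers {
		fmt.Printf("omitting nameservers: %v\n", nameservers[maxNameservers:])
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", nameservers)
}
```
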
Jan 30 13:03:41.688612 containerd[1476]: time="2025-01-30T13:03:41.688508066Z" level=info msg="Ensure that sandbox 79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426 in task-service has been cleanup successfully" Jan 30 13:03:41.689192 containerd[1476]: time="2025-01-30T13:03:41.689107099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:2,}" Jan 30 13:03:41.689192 containerd[1476]: time="2025-01-30T13:03:41.689193161Z" level=info msg="TearDown network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" successfully" Jan 30 13:03:41.689282 containerd[1476]: time="2025-01-30T13:03:41.689204919Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" returns successfully" Jan 30 13:03:41.689702 containerd[1476]: time="2025-01-30T13:03:41.689488019Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:03:41.689702 containerd[1476]: time="2025-01-30T13:03:41.689569442Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:03:41.689702 containerd[1476]: time="2025-01-30T13:03:41.689580319Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:03:41.690968 containerd[1476]: time="2025-01-30T13:03:41.690928235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:2,}" Jan 30 13:03:41.691295 kubelet[2559]: I0130 13:03:41.691272 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c" Jan 30 13:03:41.691308 systemd[1]: run-netns-cni\x2dae5fd2bd\x2d17e1\x2dd746\x2df833\x2d394278d10c33.mount: Deactivated successfully. 
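
Each failed sandbox leaves a CNI network namespace mounted under /run/netns, and its removal shows up here as a run-netns-cni\x2d….mount unit being deactivated. The \x2d runs are systemd unit-name escaping of the "-" characters in the netns filename. A minimal sketch of that mapping, covering only the characters that actually occur in these unit names:

```go
package main

import (
	"fmt"
	"strings"
)

// mountUnitFor derives a systemd mount unit name from a mount point:
// "-" inside each path component is escaped to \x2d, components are
// joined with "-", and the ".mount" suffix is appended. Partial by
// design; systemd's full escaping handles more characters than this.
func mountUnitFor(path string) string {
	var parts []string
	for _, c := range strings.Split(strings.TrimPrefix(path, "/"), "/") {
		parts = append(parts, strings.ReplaceAll(c, "-", `\x2d`))
	}
	return strings.Join(parts, "-") + ".mount"
}

func main() {
	// Reproduces the first netns mount unit deactivated at 13:03:40.677.
	fmt.Println(mountUnitFor("/run/netns/cni-16161922-3221-8998-96d2-a00e4863a80b"))
}
```
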
Jan 30 13:03:41.693334 containerd[1476]: time="2025-01-30T13:03:41.692906817Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:03:41.693334 containerd[1476]: time="2025-01-30T13:03:41.693167362Z" level=info msg="Ensure that sandbox 0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c in task-service has been cleanup successfully" Jan 30 13:03:41.694569 containerd[1476]: time="2025-01-30T13:03:41.694542951Z" level=info msg="TearDown network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" successfully" Jan 30 13:03:41.694769 containerd[1476]: time="2025-01-30T13:03:41.694726833Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" returns successfully" Jan 30 13:03:41.695460 kubelet[2559]: I0130 13:03:41.695074 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e" Jan 30 13:03:41.695524 containerd[1476]: time="2025-01-30T13:03:41.695289074Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:03:41.695524 containerd[1476]: time="2025-01-30T13:03:41.695385334Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:03:41.695524 containerd[1476]: time="2025-01-30T13:03:41.695406329Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:03:41.695694 containerd[1476]: time="2025-01-30T13:03:41.695657796Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:03:41.695832 containerd[1476]: time="2025-01-30T13:03:41.695797966Z" level=info msg="Ensure that sandbox df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e in task-service has been cleanup successfully" Jan 30 13:03:41.696107 containerd[1476]: time="2025-01-30T13:03:41.695984407Z" level=info msg="TearDown network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" successfully" Jan 30 13:03:41.696107 containerd[1476]: time="2025-01-30T13:03:41.696002763Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" returns successfully" Jan 30 13:03:41.697655 containerd[1476]: time="2025-01-30T13:03:41.697288892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:03:41.698186 kubelet[2559]: E0130 13:03:41.698165 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:41.698245 kubelet[2559]: I0130 13:03:41.698218 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be" Jan 30 13:03:41.699228 containerd[1476]: time="2025-01-30T13:03:41.698523591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:2,}" Jan 30 13:03:41.699228 containerd[1476]: time="2025-01-30T13:03:41.699038122Z" level=info msg="StopPodSandbox for 
\"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:03:41.699322 containerd[1476]: time="2025-01-30T13:03:41.699241559Z" level=info msg="Ensure that sandbox ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be in task-service has been cleanup successfully" Jan 30 13:03:41.699466 containerd[1476]: time="2025-01-30T13:03:41.699439437Z" level=info msg="TearDown network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" successfully" Jan 30 13:03:41.699466 containerd[1476]: time="2025-01-30T13:03:41.699459953Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" returns successfully" Jan 30 13:03:41.700184 containerd[1476]: time="2025-01-30T13:03:41.700143409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:03:41.953696 containerd[1476]: time="2025-01-30T13:03:41.952362105Z" level=error msg="Failed to destroy network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.953696 containerd[1476]: time="2025-01-30T13:03:41.952836405Z" level=error msg="encountered an error cleaning up failed sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.953696 containerd[1476]: time="2025-01-30T13:03:41.952903071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.953952 kubelet[2559]: E0130 13:03:41.953837 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.953952 kubelet[2559]: E0130 13:03:41.953893 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:41.953952 kubelet[2559]: E0130 13:03:41.953914 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:41.954083 kubelet[2559]: E0130 13:03:41.953955 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:41.967609 containerd[1476]: time="2025-01-30T13:03:41.967535421Z" level=error msg="Failed to destroy network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.975015 containerd[1476]: time="2025-01-30T13:03:41.974878550Z" level=error msg="encountered an error cleaning up failed sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.975015 containerd[1476]: time="2025-01-30T13:03:41.974979009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.975234 containerd[1476]: time="2025-01-30T13:03:41.975201961Z" level=error msg="Failed to destroy network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.975316 kubelet[2559]: E0130 13:03:41.975267 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.975654 containerd[1476]: time="2025-01-30T13:03:41.975618394Z" level=error msg="encountered an error cleaning up failed sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.975704 containerd[1476]: time="2025-01-30T13:03:41.975681500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.976230 kubelet[2559]: E0130 13:03:41.976200 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.976319 kubelet[2559]: E0130 13:03:41.976242 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:41.976319 kubelet[2559]: E0130 13:03:41.976263 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:41.976370 kubelet[2559]: E0130 13:03:41.976312 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" podUID="7acfabdf-bbd3-498c-9434-a86e65427513" Jan 30 13:03:41.976617 kubelet[2559]: E0130 13:03:41.976569 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:41.976690 kubelet[2559]: E0130 13:03:41.976618 2559 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:41.978048 kubelet[2559]: E0130 13:03:41.977980 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" podUID="e0b441b3-88a9-4555-8974-b147721621a3" Jan 30 13:03:41.984806 containerd[1476]: time="2025-01-30T13:03:41.984749825Z" level=error msg="Failed to destroy network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.985138 containerd[1476]: time="2025-01-30T13:03:41.985102511Z" level=error msg="encountered an error cleaning up failed sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.985205 containerd[1476]: time="2025-01-30T13:03:41.985180254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.985476 kubelet[2559]: E0130 13:03:41.985433 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.985535 kubelet[2559]: E0130 13:03:41.985492 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:41.985535 kubelet[2559]: E0130 13:03:41.985510 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:41.985601 kubelet[2559]: E0130 13:03:41.985546 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-89crv" podUID="51a31eb8-8977-41a3-b690-162f3ef160ae" Jan 30 13:03:41.995414 containerd[1476]: time="2025-01-30T13:03:41.995350387Z" level=error msg="Failed to destroy network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.996925 containerd[1476]: time="2025-01-30T13:03:41.996888622Z" level=error msg="encountered an error cleaning up failed sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:41.997083 containerd[1476]: time="2025-01-30T13:03:41.997060105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:42.003707 containerd[1476]: time="2025-01-30T13:03:42.003661232Z" level=error msg="Failed to destroy network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:42.031493 kubelet[2559]: E0130 13:03:42.031422 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
13:03:42.031630 kubelet[2559]: E0130 13:03:42.031502 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:42.031630 kubelet[2559]: E0130 13:03:42.031530 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:42.031630 kubelet[2559]: E0130 13:03:42.031601 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w7k54" podUID="c16bbc77-2b47-4ff4-846f-0b437cb6c4ee" Jan 30 13:03:42.032482 containerd[1476]: time="2025-01-30T13:03:42.005422123Z" level=error msg="encountered an error cleaning up failed sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:42.032482 containerd[1476]: time="2025-01-30T13:03:42.032289564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:42.033260 kubelet[2559]: E0130 13:03:42.033154 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:42.033260 kubelet[2559]: E0130 13:03:42.033212 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:42.033260 kubelet[2559]: E0130 13:03:42.033234 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:42.033523 kubelet[2559]: E0130 13:03:42.033268 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" podUID="52142e13-5b13-4ee1-bef6-e84504589fe4" Jan 30 13:03:42.034487 systemd[1]: run-netns-cni\x2dbc02e791\x2d2fe0\x2df8e2\x2d95b9\x2d9ffa6e8b9465.mount: Deactivated successfully. Jan 30 13:03:42.034571 systemd[1]: run-netns-cni\x2d8d5ff411\x2d7577\x2d039a\x2dbd05\x2d6daecff4cac9.mount: Deactivated successfully. Jan 30 13:03:42.034646 systemd[1]: run-netns-cni\x2df5e73495\x2d2391\x2da82e\x2db078\x2d39ef306662ba.mount: Deactivated successfully. 
Jan 30 13:03:42.702007 kubelet[2559]: I0130 13:03:42.701829 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c" Jan 30 13:03:42.702811 containerd[1476]: time="2025-01-30T13:03:42.702779221Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" Jan 30 13:03:42.704635 kubelet[2559]: I0130 13:03:42.704063 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7" Jan 30 13:03:42.705513 containerd[1476]: time="2025-01-30T13:03:42.705392543Z" level=info msg="Ensure that sandbox c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c in task-service has been cleanup successfully" Jan 30 13:03:42.706070 containerd[1476]: time="2025-01-30T13:03:42.705725997Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" Jan 30 13:03:42.706509 containerd[1476]: time="2025-01-30T13:03:42.706385387Z" level=info msg="TearDown network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" successfully" Jan 30 13:03:42.706509 containerd[1476]: time="2025-01-30T13:03:42.706408942Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" returns successfully" Jan 30 13:03:42.707436 containerd[1476]: time="2025-01-30T13:03:42.706921921Z" level=info msg="Ensure that sandbox d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7 in task-service has been cleanup successfully" Jan 30 13:03:42.708042 systemd[1]: run-netns-cni\x2dbf265117\x2d93d6\x2dbab1\x2dab2b\x2d88e32e6fdd30.mount: Deactivated successfully. Jan 30 13:03:42.708517 containerd[1476]: time="2025-01-30T13:03:42.708111565Z" level=info msg="TearDown network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" successfully" Jan 30 13:03:42.708517 containerd[1476]: time="2025-01-30T13:03:42.708132961Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" returns successfully" Jan 30 13:03:42.710973 systemd[1]: run-netns-cni\x2d3773408f\x2d617b\x2d1912\x2d679e\x2d9253a0ab25ca.mount: Deactivated successfully. 
Jan 30 13:03:42.712932 containerd[1476]: time="2025-01-30T13:03:42.712889779Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:03:42.713022 containerd[1476]: time="2025-01-30T13:03:42.713008795Z" level=info msg="TearDown network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" successfully" Jan 30 13:03:42.713058 containerd[1476]: time="2025-01-30T13:03:42.713020833Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" returns successfully" Jan 30 13:03:42.713084 containerd[1476]: time="2025-01-30T13:03:42.712915054Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:03:42.713379 containerd[1476]: time="2025-01-30T13:03:42.713118774Z" level=info msg="TearDown network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" successfully" Jan 30 13:03:42.713379 containerd[1476]: time="2025-01-30T13:03:42.713132691Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" returns successfully" Jan 30 13:03:42.714212 containerd[1476]: time="2025-01-30T13:03:42.713880263Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:03:42.714212 containerd[1476]: time="2025-01-30T13:03:42.713884462Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:03:42.715242 kubelet[2559]: I0130 13:03:42.714958 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc" Jan 30 13:03:42.715427 containerd[1476]: time="2025-01-30T13:03:42.715160210Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:03:42.715427 containerd[1476]: time="2025-01-30T13:03:42.715184925Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:03:42.716511 containerd[1476]: time="2025-01-30T13:03:42.716391566Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" Jan 30 13:03:42.717757 containerd[1476]: time="2025-01-30T13:03:42.717478031Z" level=info msg="Ensure that sandbox 5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc in task-service has been cleanup successfully" Jan 30 13:03:42.717757 containerd[1476]: time="2025-01-30T13:03:42.717525621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:3,}" Jan 30 13:03:42.719526 containerd[1476]: time="2025-01-30T13:03:42.719383373Z" level=info msg="TearDown network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" successfully" Jan 30 13:03:42.719526 containerd[1476]: time="2025-01-30T13:03:42.719475235Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" returns successfully" Jan 30 13:03:42.719786 systemd[1]: run-netns-cni\x2dd8d61025\x2ddd48\x2dd462\x2d18f5\x2d6cb3db99be12.mount: Deactivated successfully. 
Jan 30 13:03:42.721013 containerd[1476]: time="2025-01-30T13:03:42.720840605Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:03:42.721622 containerd[1476]: time="2025-01-30T13:03:42.721183337Z" level=info msg="TearDown network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" successfully" Jan 30 13:03:42.721622 containerd[1476]: time="2025-01-30T13:03:42.721202653Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" returns successfully" Jan 30 13:03:42.722455 kubelet[2559]: I0130 13:03:42.722423 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26" Jan 30 13:03:42.722762 containerd[1476]: time="2025-01-30T13:03:42.722732790Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:03:42.722972 containerd[1476]: time="2025-01-30T13:03:42.722954106Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:03:42.723054 containerd[1476]: time="2025-01-30T13:03:42.723039170Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:03:42.723465 containerd[1476]: time="2025-01-30T13:03:42.723443890Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" Jan 30 13:03:42.723598 kubelet[2559]: E0130 13:03:42.723558 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:42.723934 containerd[1476]: time="2025-01-30T13:03:42.723869765Z" level=info msg="Ensure that sandbox 35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26 in task-service has been cleanup successfully" Jan 30 13:03:42.724385 containerd[1476]: time="2025-01-30T13:03:42.724356269Z" level=info msg="TearDown network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" successfully" Jan 30 13:03:42.724642 containerd[1476]: time="2025-01-30T13:03:42.724495801Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" returns successfully" Jan 30 13:03:42.724837 containerd[1476]: time="2025-01-30T13:03:42.724801861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:3,}" Jan 30 13:03:42.725848 containerd[1476]: time="2025-01-30T13:03:42.725625458Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:03:42.725848 containerd[1476]: time="2025-01-30T13:03:42.725714080Z" level=info msg="TearDown network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" successfully" Jan 30 13:03:42.725848 containerd[1476]: time="2025-01-30T13:03:42.725723598Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" returns successfully" Jan 30 13:03:42.726646 systemd[1]: run-netns-cni\x2d2aec67bf\x2d7243\x2d1638\x2d5b16\x2dbc2d06df9a9b.mount: Deactivated successfully. 
Jan 30 13:03:42.728434 containerd[1476]: time="2025-01-30T13:03:42.727500166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:03:42.728516 kubelet[2559]: I0130 13:03:42.727755 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824" Jan 30 13:03:42.728829 containerd[1476]: time="2025-01-30T13:03:42.727638139Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:03:42.728829 containerd[1476]: time="2025-01-30T13:03:42.728763196Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:03:42.728829 containerd[1476]: time="2025-01-30T13:03:42.728330722Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" Jan 30 13:03:42.729200 containerd[1476]: time="2025-01-30T13:03:42.729116047Z" level=info msg="Ensure that sandbox 6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824 in task-service has been cleanup successfully" Jan 30 13:03:42.730199 containerd[1476]: time="2025-01-30T13:03:42.730165159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:3,}" Jan 30 13:03:42.730679 containerd[1476]: time="2025-01-30T13:03:42.730649983Z" level=info msg="TearDown network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" successfully" Jan 30 13:03:42.730679 containerd[1476]: time="2025-01-30T13:03:42.730677457Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" returns successfully" Jan 30 13:03:42.730947 kubelet[2559]: I0130 13:03:42.730923 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221" Jan 30 13:03:42.731860 containerd[1476]: time="2025-01-30T13:03:42.731653104Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" Jan 30 13:03:42.731860 containerd[1476]: time="2025-01-30T13:03:42.731702934Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:03:42.731860 containerd[1476]: time="2025-01-30T13:03:42.731826110Z" level=info msg="Ensure that sandbox bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221 in task-service has been cleanup successfully" Jan 30 13:03:42.731985 containerd[1476]: time="2025-01-30T13:03:42.731948766Z" level=info msg="TearDown network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" successfully" Jan 30 13:03:42.731985 containerd[1476]: time="2025-01-30T13:03:42.731967482Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" returns successfully" Jan 30 13:03:42.732148 containerd[1476]: time="2025-01-30T13:03:42.732060704Z" level=info msg="TearDown network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" successfully" Jan 30 13:03:42.732148 containerd[1476]: time="2025-01-30T13:03:42.732077980Z" level=info msg="StopPodSandbox for 
\"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" returns successfully" Jan 30 13:03:42.732539 containerd[1476]: time="2025-01-30T13:03:42.732517773Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:03:42.732797 containerd[1476]: time="2025-01-30T13:03:42.732731371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:03:42.732941 containerd[1476]: time="2025-01-30T13:03:42.732878182Z" level=info msg="TearDown network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" successfully" Jan 30 13:03:42.732941 containerd[1476]: time="2025-01-30T13:03:42.732900377Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" returns successfully" Jan 30 13:03:42.733555 containerd[1476]: time="2025-01-30T13:03:42.733528173Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:03:42.733739 containerd[1476]: time="2025-01-30T13:03:42.733706818Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:03:42.733739 containerd[1476]: time="2025-01-30T13:03:42.733727494Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:03:42.734996 kubelet[2559]: E0130 13:03:42.734885 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:42.735268 containerd[1476]: time="2025-01-30T13:03:42.735235915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:3,}" Jan 30 13:03:43.040041 systemd[1]: run-netns-cni\x2d1a62e93e\x2db5a2\x2d177e\x2dea5b\x2d75dae54f49e9.mount: Deactivated successfully. Jan 30 13:03:43.040140 systemd[1]: run-netns-cni\x2d3b44ddfe\x2d827e\x2d5acc\x2de427\x2d8907709c8c63.mount: Deactivated successfully. Jan 30 13:03:43.097977 containerd[1476]: time="2025-01-30T13:03:43.097896720Z" level=error msg="Failed to destroy network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.100583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3-shm.mount: Deactivated successfully. 
Jan 30 13:03:43.101176 containerd[1476]: time="2025-01-30T13:03:43.100981627Z" level=error msg="encountered an error cleaning up failed sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.101336 containerd[1476]: time="2025-01-30T13:03:43.101310366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.101979 kubelet[2559]: E0130 13:03:43.101679 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.101979 kubelet[2559]: E0130 13:03:43.101745 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:43.101979 kubelet[2559]: E0130 13:03:43.101765 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:43.102232 kubelet[2559]: E0130 13:03:43.101810 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" podUID="52142e13-5b13-4ee1-bef6-e84504589fe4" Jan 30 13:03:43.115296 containerd[1476]: time="2025-01-30T13:03:43.115239021Z" level=error msg="Failed to destroy network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.115932 containerd[1476]: time="2025-01-30T13:03:43.115891420Z" level=error msg="encountered an error cleaning up failed sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.116001 containerd[1476]: time="2025-01-30T13:03:43.115959967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.117708 kubelet[2559]: E0130 13:03:43.116218 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.117708 kubelet[2559]: E0130 13:03:43.116279 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:43.117708 kubelet[2559]: E0130 13:03:43.116299 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:43.117485 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280-shm.mount: Deactivated successfully. 
Jan 30 13:03:43.117943 kubelet[2559]: E0130 13:03:43.116335 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:43.120117 containerd[1476]: time="2025-01-30T13:03:43.120058286Z" level=error msg="Failed to destroy network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.123350 containerd[1476]: time="2025-01-30T13:03:43.121530893Z" level=error msg="encountered an error cleaning up failed sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.123350 containerd[1476]: time="2025-01-30T13:03:43.121608039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.122164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4-shm.mount: Deactivated successfully. 
Jan 30 13:03:43.123524 kubelet[2559]: E0130 13:03:43.121787 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.123524 kubelet[2559]: E0130 13:03:43.121840 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:43.123524 kubelet[2559]: E0130 13:03:43.121858 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:43.123634 kubelet[2559]: E0130 13:03:43.121902 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w7k54" podUID="c16bbc77-2b47-4ff4-846f-0b437cb6c4ee" Jan 30 13:03:43.132397 containerd[1476]: time="2025-01-30T13:03:43.132276259Z" level=error msg="Failed to destroy network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.134476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e-shm.mount: Deactivated successfully. 
Jan 30 13:03:43.135249 containerd[1476]: time="2025-01-30T13:03:43.135212354Z" level=error msg="encountered an error cleaning up failed sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.135390 containerd[1476]: time="2025-01-30T13:03:43.135367045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.136745 kubelet[2559]: E0130 13:03:43.136694 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.136836 kubelet[2559]: E0130 13:03:43.136762 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:43.136836 kubelet[2559]: E0130 13:03:43.136785 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:43.136901 kubelet[2559]: E0130 13:03:43.136830 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" podUID="7acfabdf-bbd3-498c-9434-a86e65427513" Jan 30 13:03:43.137927 containerd[1476]: time="2025-01-30T13:03:43.137898215Z" level=error msg="Failed to destroy network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.138645 containerd[1476]: time="2025-01-30T13:03:43.138615402Z" level=error msg="encountered an error cleaning up failed sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.138791 containerd[1476]: time="2025-01-30T13:03:43.138769014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.139180 containerd[1476]: time="2025-01-30T13:03:43.139117069Z" level=error msg="Failed to destroy network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.139313 kubelet[2559]: E0130 13:03:43.139273 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.139355 kubelet[2559]: E0130 13:03:43.139321 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:43.139355 kubelet[2559]: E0130 13:03:43.139340 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:43.139423 kubelet[2559]: E0130 13:03:43.139379 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" podUID="e0b441b3-88a9-4555-8974-b147721621a3" Jan 30 13:03:43.139711 containerd[1476]: time="2025-01-30T13:03:43.139681084Z" level=error msg="encountered an error cleaning up failed sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.139834 containerd[1476]: time="2025-01-30T13:03:43.139811180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.140630 kubelet[2559]: E0130 13:03:43.140421 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.140630 kubelet[2559]: E0130 13:03:43.140446 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:43.140630 kubelet[2559]: E0130 13:03:43.140460 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:43.140753 kubelet[2559]: E0130 13:03:43.140483 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-89crv" podUID="51a31eb8-8977-41a3-b690-162f3ef160ae" Jan 30 13:03:43.376472 containerd[1476]: 
time="2025-01-30T13:03:43.375529470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:43.376472 containerd[1476]: time="2025-01-30T13:03:43.376334801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 13:03:43.377535 containerd[1476]: time="2025-01-30T13:03:43.377505863Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:43.380613 containerd[1476]: time="2025-01-30T13:03:43.380547779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:43.381313 containerd[1476]: time="2025-01-30T13:03:43.381059524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.699654474s" Jan 30 13:03:43.381313 containerd[1476]: time="2025-01-30T13:03:43.381090958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 13:03:43.387957 containerd[1476]: time="2025-01-30T13:03:43.387914531Z" level=info msg="CreateContainer within sandbox \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:03:43.410725 containerd[1476]: time="2025-01-30T13:03:43.410660550Z" level=info msg="CreateContainer within sandbox \"b73770729b5279eba93bdca0f00d9dcbcc63d7e4cc2f6707817a800ced28775b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"053d0b57c73c951481477e9df6ad9ab492d0724d877cc27ce53921f349c22b6c\"" Jan 30 13:03:43.411334 containerd[1476]: time="2025-01-30T13:03:43.411308749Z" level=info msg="StartContainer for \"053d0b57c73c951481477e9df6ad9ab492d0724d877cc27ce53921f349c22b6c\"" Jan 30 13:03:43.464940 systemd[1]: Started cri-containerd-053d0b57c73c951481477e9df6ad9ab492d0724d877cc27ce53921f349c22b6c.scope - libcontainer container 053d0b57c73c951481477e9df6ad9ab492d0724d877cc27ce53921f349c22b6c. 
Jan 30 13:03:43.500816 containerd[1476]: time="2025-01-30T13:03:43.499075739Z" level=info msg="StartContainer for \"053d0b57c73c951481477e9df6ad9ab492d0724d877cc27ce53921f349c22b6c\" returns successfully" Jan 30 13:03:43.735664 kubelet[2559]: E0130 13:03:43.735461 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:43.739640 kubelet[2559]: I0130 13:03:43.739609 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4" Jan 30 13:03:43.740300 containerd[1476]: time="2025-01-30T13:03:43.740266493Z" level=info msg="StopPodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\"" Jan 30 13:03:43.740703 containerd[1476]: time="2025-01-30T13:03:43.740439701Z" level=info msg="Ensure that sandbox 01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4 in task-service has been cleanup successfully" Jan 30 13:03:43.746434 containerd[1476]: time="2025-01-30T13:03:43.744937786Z" level=info msg="TearDown network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" successfully" Jan 30 13:03:43.746434 containerd[1476]: time="2025-01-30T13:03:43.744976659Z" level=info msg="StopPodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" returns successfully" Jan 30 13:03:43.746434 containerd[1476]: time="2025-01-30T13:03:43.746426270Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" Jan 30 13:03:43.746705 containerd[1476]: time="2025-01-30T13:03:43.746526811Z" level=info msg="TearDown network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" successfully" Jan 30 13:03:43.746705 containerd[1476]: time="2025-01-30T13:03:43.746549847Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" returns successfully" Jan 30 13:03:43.747003 containerd[1476]: time="2025-01-30T13:03:43.746907821Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:03:43.747121 containerd[1476]: time="2025-01-30T13:03:43.747103704Z" level=info msg="TearDown network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" successfully" Jan 30 13:03:43.747277 containerd[1476]: time="2025-01-30T13:03:43.747209165Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" returns successfully" Jan 30 13:03:43.748845 containerd[1476]: time="2025-01-30T13:03:43.748688810Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:03:43.748845 containerd[1476]: time="2025-01-30T13:03:43.748782233Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:03:43.748845 containerd[1476]: time="2025-01-30T13:03:43.748808628Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:03:43.749655 kubelet[2559]: E0130 13:03:43.749631 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:43.750256 containerd[1476]: 
time="2025-01-30T13:03:43.750231564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:4,}" Jan 30 13:03:43.753832 kubelet[2559]: I0130 13:03:43.753801 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280" Jan 30 13:03:43.755421 containerd[1476]: time="2025-01-30T13:03:43.755388007Z" level=info msg="StopPodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\"" Jan 30 13:03:43.755637 containerd[1476]: time="2025-01-30T13:03:43.755618284Z" level=info msg="Ensure that sandbox ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280 in task-service has been cleanup successfully" Jan 30 13:03:43.756369 containerd[1476]: time="2025-01-30T13:03:43.756338350Z" level=info msg="TearDown network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" successfully" Jan 30 13:03:43.756417 containerd[1476]: time="2025-01-30T13:03:43.756364346Z" level=info msg="StopPodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" returns successfully" Jan 30 13:03:43.756879 containerd[1476]: time="2025-01-30T13:03:43.756785307Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" Jan 30 13:03:43.757061 containerd[1476]: time="2025-01-30T13:03:43.756989470Z" level=info msg="TearDown network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" successfully" Jan 30 13:03:43.757061 containerd[1476]: time="2025-01-30T13:03:43.757008866Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" returns successfully" Jan 30 13:03:43.757547 containerd[1476]: time="2025-01-30T13:03:43.757521411Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:03:43.757640 containerd[1476]: time="2025-01-30T13:03:43.757623672Z" level=info msg="TearDown network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" successfully" Jan 30 13:03:43.757640 containerd[1476]: time="2025-01-30T13:03:43.757638269Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" returns successfully" Jan 30 13:03:43.758266 containerd[1476]: time="2025-01-30T13:03:43.758239917Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:03:43.758384 containerd[1476]: time="2025-01-30T13:03:43.758365894Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:03:43.758384 containerd[1476]: time="2025-01-30T13:03:43.758382171Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:03:43.758853 containerd[1476]: time="2025-01-30T13:03:43.758824689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:4,}" Jan 30 13:03:43.758930 kubelet[2559]: I0130 13:03:43.758864 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3" Jan 30 13:03:43.762189 kubelet[2559]: I0130 13:03:43.762127 2559 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8lvj5" podStartSLOduration=1.67929587 podStartE2EDuration="14.762109919s" podCreationTimestamp="2025-01-30 13:03:29 +0000 UTC" firstStartedPulling="2025-01-30 13:03:30.29902345 +0000 UTC m=+12.846119373" lastFinishedPulling="2025-01-30 13:03:43.381837459 +0000 UTC m=+25.928933422" observedRunningTime="2025-01-30 13:03:43.761632808 +0000 UTC m=+26.308728771" watchObservedRunningTime="2025-01-30 13:03:43.762109919 +0000 UTC m=+26.309205882" Jan 30 13:03:43.764851 containerd[1476]: time="2025-01-30T13:03:43.764817977Z" level=info msg="StopPodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\"" Jan 30 13:03:43.768048 containerd[1476]: time="2025-01-30T13:03:43.767923880Z" level=info msg="Ensure that sandbox 53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3 in task-service has been cleanup successfully" Jan 30 13:03:43.769323 containerd[1476]: time="2025-01-30T13:03:43.769270030Z" level=info msg="TearDown network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" successfully" Jan 30 13:03:43.769323 containerd[1476]: time="2025-01-30T13:03:43.769299225Z" level=info msg="StopPodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" returns successfully" Jan 30 13:03:43.769720 containerd[1476]: time="2025-01-30T13:03:43.769694232Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" Jan 30 13:03:43.770001 containerd[1476]: time="2025-01-30T13:03:43.769916990Z" level=info msg="TearDown network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" successfully" Jan 30 13:03:43.770001 containerd[1476]: time="2025-01-30T13:03:43.769932787Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" returns successfully" Jan 30 13:03:43.770760 containerd[1476]: time="2025-01-30T13:03:43.770670690Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:03:43.770831 containerd[1476]: time="2025-01-30T13:03:43.770797947Z" level=info msg="TearDown network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" successfully" Jan 30 13:03:43.770831 containerd[1476]: time="2025-01-30T13:03:43.770810144Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" returns successfully" Jan 30 13:03:43.771142 containerd[1476]: time="2025-01-30T13:03:43.771078215Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:03:43.771185 containerd[1476]: time="2025-01-30T13:03:43.771165558Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:03:43.771185 containerd[1476]: time="2025-01-30T13:03:43.771176436Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:03:43.771790 containerd[1476]: time="2025-01-30T13:03:43.771700739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:4,}" Jan 30 13:03:43.771969 kubelet[2559]: I0130 13:03:43.771725 2559 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d" Jan 30 13:03:43.772606 containerd[1476]: time="2025-01-30T13:03:43.772291469Z" level=info msg="StopPodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\"" Jan 30 13:03:43.772606 containerd[1476]: time="2025-01-30T13:03:43.772437642Z" level=info msg="Ensure that sandbox 8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d in task-service has been cleanup successfully" Jan 30 13:03:43.772734 containerd[1476]: time="2025-01-30T13:03:43.772648203Z" level=info msg="TearDown network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" successfully" Jan 30 13:03:43.772734 containerd[1476]: time="2025-01-30T13:03:43.772665320Z" level=info msg="StopPodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" returns successfully" Jan 30 13:03:43.774560 containerd[1476]: time="2025-01-30T13:03:43.774521736Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" Jan 30 13:03:43.775162 containerd[1476]: time="2025-01-30T13:03:43.775105107Z" level=info msg="TearDown network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" successfully" Jan 30 13:03:43.775162 containerd[1476]: time="2025-01-30T13:03:43.775158777Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" returns successfully" Jan 30 13:03:43.776488 containerd[1476]: time="2025-01-30T13:03:43.776397987Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:03:43.777741 kubelet[2559]: I0130 13:03:43.777429 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797" Jan 30 13:03:43.778327 containerd[1476]: time="2025-01-30T13:03:43.778296155Z" level=info msg="TearDown network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" successfully" Jan 30 13:03:43.778327 containerd[1476]: time="2025-01-30T13:03:43.778324230Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" returns successfully" Jan 30 13:03:43.779379 containerd[1476]: time="2025-01-30T13:03:43.779301848Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:03:43.779627 containerd[1476]: time="2025-01-30T13:03:43.779395431Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:03:43.779627 containerd[1476]: time="2025-01-30T13:03:43.779405669Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:03:43.780226 kubelet[2559]: E0130 13:03:43.780125 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:43.780657 containerd[1476]: time="2025-01-30T13:03:43.780624923Z" level=info msg="StopPodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\"" Jan 30 13:03:43.781179 containerd[1476]: time="2025-01-30T13:03:43.781045045Z" level=info msg="Ensure that sandbox 5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797 in task-service has been cleanup successfully" 
Jan 30 13:03:43.782873 containerd[1476]: time="2025-01-30T13:03:43.782830353Z" level=info msg="TearDown network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" successfully" Jan 30 13:03:43.789023 kubelet[2559]: I0130 13:03:43.788983 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e" Jan 30 13:03:43.795185 containerd[1476]: time="2025-01-30T13:03:43.795145268Z" level=info msg="StopPodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" returns successfully" Jan 30 13:03:43.795603 containerd[1476]: time="2025-01-30T13:03:43.788974133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:4,}" Jan 30 13:03:43.795872 containerd[1476]: time="2025-01-30T13:03:43.795837579Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" Jan 30 13:03:43.795993 containerd[1476]: time="2025-01-30T13:03:43.795969395Z" level=info msg="TearDown network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" successfully" Jan 30 13:03:43.796051 containerd[1476]: time="2025-01-30T13:03:43.795986712Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" returns successfully" Jan 30 13:03:43.796621 containerd[1476]: time="2025-01-30T13:03:43.791096979Z" level=info msg="StopPodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\"" Jan 30 13:03:43.796621 containerd[1476]: time="2025-01-30T13:03:43.796286656Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:03:43.796621 containerd[1476]: time="2025-01-30T13:03:43.796394796Z" level=info msg="TearDown network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" successfully" Jan 30 13:03:43.796621 containerd[1476]: time="2025-01-30T13:03:43.796405994Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" returns successfully" Jan 30 13:03:43.796621 containerd[1476]: time="2025-01-30T13:03:43.796289655Z" level=info msg="Ensure that sandbox 9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e in task-service has been cleanup successfully" Jan 30 13:03:43.797362 containerd[1476]: time="2025-01-30T13:03:43.797323463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:03:43.798867 containerd[1476]: time="2025-01-30T13:03:43.798829504Z" level=info msg="TearDown network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" successfully" Jan 30 13:03:43.799216 containerd[1476]: time="2025-01-30T13:03:43.799192517Z" level=info msg="StopPodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" returns successfully" Jan 30 13:03:43.800553 containerd[1476]: time="2025-01-30T13:03:43.800518630Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" Jan 30 13:03:43.801473 containerd[1476]: time="2025-01-30T13:03:43.801437940Z" level=info msg="TearDown network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" successfully" Jan 30 13:03:43.801473 
containerd[1476]: time="2025-01-30T13:03:43.801467374Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" returns successfully" Jan 30 13:03:43.803352 containerd[1476]: time="2025-01-30T13:03:43.803318031Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:03:43.803433 containerd[1476]: time="2025-01-30T13:03:43.803415533Z" level=info msg="TearDown network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" successfully" Jan 30 13:03:43.803433 containerd[1476]: time="2025-01-30T13:03:43.803429730Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" returns successfully" Jan 30 13:03:43.803920 containerd[1476]: time="2025-01-30T13:03:43.803890525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:03:43.886578 containerd[1476]: time="2025-01-30T13:03:43.886523268Z" level=error msg="Failed to destroy network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.886818 containerd[1476]: time="2025-01-30T13:03:43.886725830Z" level=error msg="Failed to destroy network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.887149 containerd[1476]: time="2025-01-30T13:03:43.887030893Z" level=error msg="encountered an error cleaning up failed sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.887149 containerd[1476]: time="2025-01-30T13:03:43.887097121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.887750 kubelet[2559]: E0130 13:03:43.887354 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.887750 kubelet[2559]: E0130 13:03:43.887435 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:43.887750 kubelet[2559]: E0130 13:03:43.887456 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kw84f" Jan 30 13:03:43.888067 kubelet[2559]: E0130 13:03:43.887508 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kw84f_calico-system(21d84f57-66ce-4eaa-a49a-963d6f74f4a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kw84f" podUID="21d84f57-66ce-4eaa-a49a-963d6f74f4a0" Jan 30 13:03:43.888176 containerd[1476]: time="2025-01-30T13:03:43.887824786Z" level=error msg="encountered an error cleaning up failed sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.888176 containerd[1476]: time="2025-01-30T13:03:43.887878776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.890320 kubelet[2559]: E0130 13:03:43.890244 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:43.890442 kubelet[2559]: E0130 13:03:43.890375 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:43.890442 kubelet[2559]: E0130 13:03:43.890400 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w7k54" Jan 30 13:03:43.890520 kubelet[2559]: E0130 13:03:43.890439 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w7k54_kube-system(c16bbc77-2b47-4ff4-846f-0b437cb6c4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w7k54" podUID="c16bbc77-2b47-4ff4-846f-0b437cb6c4ee" Jan 30 13:03:43.898823 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:03:43.899482 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 30 13:03:43.991706 containerd[1476]: time="2025-01-30T13:03:43.991547655Z" level=error msg="Failed to destroy network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.005859 containerd[1476]: time="2025-01-30T13:03:44.005138945Z" level=error msg="encountered an error cleaning up failed sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.005859 containerd[1476]: time="2025-01-30T13:03:44.005705686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.006709 kubelet[2559]: E0130 13:03:44.006659 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.006709 kubelet[2559]: E0130 13:03:44.006762 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:44.006709 kubelet[2559]: E0130 13:03:44.006787 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" Jan 30 13:03:44.007249 kubelet[2559]: E0130 13:03:44.007049 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f499d887f-64bsx_calico-system(52142e13-5b13-4ee1-bef6-e84504589fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" podUID="52142e13-5b13-4ee1-bef6-e84504589fe4" Jan 30 13:03:44.020984 containerd[1476]: time="2025-01-30T13:03:44.020937196Z" level=error msg="Failed to destroy network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.023053 containerd[1476]: time="2025-01-30T13:03:44.022442974Z" level=error msg="encountered an error cleaning up failed sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.023053 containerd[1476]: time="2025-01-30T13:03:44.022522760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.023443 kubelet[2559]: E0130 13:03:44.023369 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.023723 kubelet[2559]: E0130 13:03:44.023462 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:44.023802 kubelet[2559]: E0130 13:03:44.023733 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" Jan 30 13:03:44.023802 kubelet[2559]: E0130 13:03:44.023787 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-gzb44_calico-apiserver(e0b441b3-88a9-4555-8974-b147721621a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" podUID="e0b441b3-88a9-4555-8974-b147721621a3" Jan 30 13:03:44.028848 containerd[1476]: time="2025-01-30T13:03:44.028796948Z" level=error msg="Failed to destroy network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.029654 containerd[1476]: time="2025-01-30T13:03:44.029615246Z" level=error msg="encountered an error cleaning up failed sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.029712 containerd[1476]: time="2025-01-30T13:03:44.029686354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.032153 kubelet[2559]: E0130 13:03:44.029907 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.032153 kubelet[2559]: E0130 13:03:44.029964 2559 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:44.032153 kubelet[2559]: E0130 13:03:44.029986 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" Jan 30 13:03:44.032259 kubelet[2559]: E0130 13:03:44.030024 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7874549f5f-h26th_calico-apiserver(7acfabdf-bbd3-498c-9434-a86e65427513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" podUID="7acfabdf-bbd3-498c-9434-a86e65427513" Jan 30 13:03:44.033891 containerd[1476]: time="2025-01-30T13:03:44.033841591Z" level=error msg="Failed to destroy network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.034332 containerd[1476]: time="2025-01-30T13:03:44.034295352Z" level=error msg="encountered an error cleaning up failed sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.034396 containerd[1476]: time="2025-01-30T13:03:44.034365779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.034887 kubelet[2559]: E0130 13:03:44.034707 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 13:03:44.034887 kubelet[2559]: E0130 13:03:44.034764 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:44.034887 kubelet[2559]: E0130 13:03:44.034789 2559 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89crv" Jan 30 13:03:44.035051 kubelet[2559]: E0130 13:03:44.034825 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-89crv_kube-system(51a31eb8-8977-41a3-b690-162f3ef160ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-89crv" podUID="51a31eb8-8977-41a3-b690-162f3ef160ae" Jan 30 13:03:44.041129 systemd[1]: run-netns-cni\x2df3fddb57\x2d6fa0\x2d0102\x2dab41\x2d86f789a67ffe.mount: Deactivated successfully. Jan 30 13:03:44.041229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797-shm.mount: Deactivated successfully. Jan 30 13:03:44.041282 systemd[1]: run-netns-cni\x2de4c08587\x2dc182\x2ddd11\x2d8aab\x2d0cbf3bf98a96.mount: Deactivated successfully. Jan 30 13:03:44.041338 systemd[1]: run-netns-cni\x2d4d739a57\x2dd058\x2d1b34\x2dfb19\x2d8b87f3e63e59.mount: Deactivated successfully. Jan 30 13:03:44.041383 systemd[1]: run-netns-cni\x2de95b1dc6\x2ddb58\x2d46f8\x2dd9d2\x2d81b6a61e8765.mount: Deactivated successfully. Jan 30 13:03:44.041425 systemd[1]: run-netns-cni\x2d576e2d02\x2d14fc\x2d5fc0\x2dba9d\x2d407ca57ce9f2.mount: Deactivated successfully. Jan 30 13:03:44.041467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d-shm.mount: Deactivated successfully. Jan 30 13:03:44.041514 systemd[1]: run-netns-cni\x2df089db25\x2dba85\x2d1f99\x2d1019\x2d8b278f931a6f.mount: Deactivated successfully. Jan 30 13:03:44.041557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852083884.mount: Deactivated successfully. Jan 30 13:03:44.049443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12-shm.mount: Deactivated successfully. 
Jan 30 13:03:44.793106 kubelet[2559]: I0130 13:03:44.793056 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c" Jan 30 13:03:44.804008 containerd[1476]: time="2025-01-30T13:03:44.803956189Z" level=info msg="StopPodSandbox for \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\"" Jan 30 13:03:44.804382 containerd[1476]: time="2025-01-30T13:03:44.804149236Z" level=info msg="Ensure that sandbox 0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c in task-service has been cleanup successfully" Jan 30 13:03:44.804382 containerd[1476]: time="2025-01-30T13:03:44.804346082Z" level=info msg="TearDown network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\" successfully" Jan 30 13:03:44.804382 containerd[1476]: time="2025-01-30T13:03:44.804359719Z" level=info msg="StopPodSandbox for \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\" returns successfully" Jan 30 13:03:44.805245 containerd[1476]: time="2025-01-30T13:03:44.804748292Z" level=info msg="StopPodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\"" Jan 30 13:03:44.805245 containerd[1476]: time="2025-01-30T13:03:44.804850314Z" level=info msg="TearDown network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" successfully" Jan 30 13:03:44.805245 containerd[1476]: time="2025-01-30T13:03:44.804861592Z" level=info msg="StopPodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" returns successfully" Jan 30 13:03:44.805245 containerd[1476]: time="2025-01-30T13:03:44.805138264Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" Jan 30 13:03:44.805528 containerd[1476]: time="2025-01-30T13:03:44.805217250Z" level=info msg="TearDown network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" successfully" Jan 30 13:03:44.805528 containerd[1476]: time="2025-01-30T13:03:44.805334909Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" returns successfully" Jan 30 13:03:44.805700 containerd[1476]: time="2025-01-30T13:03:44.805665212Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:03:44.805905 containerd[1476]: time="2025-01-30T13:03:44.805887453Z" level=info msg="TearDown network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" successfully" Jan 30 13:03:44.806053 containerd[1476]: time="2025-01-30T13:03:44.805939884Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" returns successfully" Jan 30 13:03:44.806669 containerd[1476]: time="2025-01-30T13:03:44.806640682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:03:44.806810 systemd[1]: run-netns-cni\x2de0dd9e40\x2d5cc0\x2dc75c\x2d38d0\x2d17a0566fbe08.mount: Deactivated successfully. 
Jan 30 13:03:44.807659 kubelet[2559]: I0130 13:03:44.807620 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12" Jan 30 13:03:44.808941 containerd[1476]: time="2025-01-30T13:03:44.808769112Z" level=info msg="StopPodSandbox for \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\"" Jan 30 13:03:44.809013 containerd[1476]: time="2025-01-30T13:03:44.808956239Z" level=info msg="Ensure that sandbox 4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12 in task-service has been cleanup successfully" Jan 30 13:03:44.809200 containerd[1476]: time="2025-01-30T13:03:44.809176281Z" level=info msg="TearDown network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\" successfully" Jan 30 13:03:44.809200 containerd[1476]: time="2025-01-30T13:03:44.809195598Z" level=info msg="StopPodSandbox for \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\" returns successfully" Jan 30 13:03:44.810095 containerd[1476]: time="2025-01-30T13:03:44.810031572Z" level=info msg="StopPodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\"" Jan 30 13:03:44.811856 containerd[1476]: time="2025-01-30T13:03:44.811764391Z" level=info msg="TearDown network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" successfully" Jan 30 13:03:44.811856 containerd[1476]: time="2025-01-30T13:03:44.811796545Z" level=info msg="StopPodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" returns successfully" Jan 30 13:03:44.811891 systemd[1]: run-netns-cni\x2dedb1c952\x2d8918\x2d5c83\x2d5e01\x2d0e7a152ae47d.mount: Deactivated successfully. Jan 30 13:03:44.812427 containerd[1476]: time="2025-01-30T13:03:44.812371485Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" Jan 30 13:03:44.812484 containerd[1476]: time="2025-01-30T13:03:44.812454031Z" level=info msg="TearDown network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" successfully" Jan 30 13:03:44.812484 containerd[1476]: time="2025-01-30T13:03:44.812465589Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" returns successfully" Jan 30 13:03:44.813245 containerd[1476]: time="2025-01-30T13:03:44.812969941Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:03:44.813245 containerd[1476]: time="2025-01-30T13:03:44.813071083Z" level=info msg="TearDown network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" successfully" Jan 30 13:03:44.813245 containerd[1476]: time="2025-01-30T13:03:44.813082481Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" returns successfully" Jan 30 13:03:44.813382 kubelet[2559]: I0130 13:03:44.812842 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d" Jan 30 13:03:44.815775 containerd[1476]: time="2025-01-30T13:03:44.815724542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:03:44.816673 containerd[1476]: time="2025-01-30T13:03:44.815979137Z" level=info msg="StopPodSandbox for 
\"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\"" Jan 30 13:03:44.817022 containerd[1476]: time="2025-01-30T13:03:44.816990521Z" level=info msg="Ensure that sandbox 3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d in task-service has been cleanup successfully" Jan 30 13:03:44.817458 kubelet[2559]: I0130 13:03:44.817246 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0" Jan 30 13:03:44.818042 containerd[1476]: time="2025-01-30T13:03:44.818005665Z" level=info msg="TearDown network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\" successfully" Jan 30 13:03:44.818042 containerd[1476]: time="2025-01-30T13:03:44.818034460Z" level=info msg="StopPodSandbox for \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\" returns successfully" Jan 30 13:03:44.818790 containerd[1476]: time="2025-01-30T13:03:44.818743296Z" level=info msg="StopPodSandbox for \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\"" Jan 30 13:03:44.818913 containerd[1476]: time="2025-01-30T13:03:44.818861836Z" level=info msg="StopPodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\"" Jan 30 13:03:44.819017 containerd[1476]: time="2025-01-30T13:03:44.818987934Z" level=info msg="TearDown network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" successfully" Jan 30 13:03:44.819050 containerd[1476]: time="2025-01-30T13:03:44.819015529Z" level=info msg="StopPodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" returns successfully" Jan 30 13:03:44.819352 containerd[1476]: time="2025-01-30T13:03:44.819326435Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" Jan 30 13:03:44.819404 containerd[1476]: time="2025-01-30T13:03:44.819377466Z" level=info msg="Ensure that sandbox 13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0 in task-service has been cleanup successfully" Jan 30 13:03:44.819435 containerd[1476]: time="2025-01-30T13:03:44.819420659Z" level=info msg="TearDown network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" successfully" Jan 30 13:03:44.819456 containerd[1476]: time="2025-01-30T13:03:44.819431977Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" returns successfully" Jan 30 13:03:44.819750 containerd[1476]: time="2025-01-30T13:03:44.819700810Z" level=info msg="TearDown network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\" successfully" Jan 30 13:03:44.819750 containerd[1476]: time="2025-01-30T13:03:44.819739363Z" level=info msg="StopPodSandbox for \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\" returns successfully" Jan 30 13:03:44.821079 systemd[1]: run-netns-cni\x2d13e3a1a9\x2dce1a\x2dffca\x2df8f9\x2d2693a61ef124.mount: Deactivated successfully. Jan 30 13:03:44.821173 systemd[1]: run-netns-cni\x2d9941bc2c\x2d188c\x2d7ac6\x2defa9\x2d5c9b44bf4066.mount: Deactivated successfully. 
Jan 30 13:03:44.821867 containerd[1476]: time="2025-01-30T13:03:44.821569805Z" level=info msg="StopPodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\"" Jan 30 13:03:44.822002 containerd[1476]: time="2025-01-30T13:03:44.821754332Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:03:44.822080 containerd[1476]: time="2025-01-30T13:03:44.822058040Z" level=info msg="TearDown network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" successfully" Jan 30 13:03:44.822140 containerd[1476]: time="2025-01-30T13:03:44.822076356Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" returns successfully" Jan 30 13:03:44.822273 containerd[1476]: time="2025-01-30T13:03:44.822242328Z" level=info msg="TearDown network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" successfully" Jan 30 13:03:44.822313 containerd[1476]: time="2025-01-30T13:03:44.822278161Z" level=info msg="StopPodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" returns successfully" Jan 30 13:03:44.822338 containerd[1476]: time="2025-01-30T13:03:44.822323193Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:03:44.822409 containerd[1476]: time="2025-01-30T13:03:44.822393141Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:03:44.822438 containerd[1476]: time="2025-01-30T13:03:44.822408019Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:03:44.823007 containerd[1476]: time="2025-01-30T13:03:44.822975760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:5,}" Jan 30 13:03:44.823246 containerd[1476]: time="2025-01-30T13:03:44.823219997Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" Jan 30 13:03:44.823522 kubelet[2559]: I0130 13:03:44.823483 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12" Jan 30 13:03:44.823607 containerd[1476]: time="2025-01-30T13:03:44.823559138Z" level=info msg="TearDown network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" successfully" Jan 30 13:03:44.823607 containerd[1476]: time="2025-01-30T13:03:44.823577335Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" returns successfully" Jan 30 13:03:44.824413 containerd[1476]: time="2025-01-30T13:03:44.824391674Z" level=info msg="StopPodSandbox for \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\"" Jan 30 13:03:44.824715 containerd[1476]: time="2025-01-30T13:03:44.824550006Z" level=info msg="Ensure that sandbox 90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12 in task-service has been cleanup successfully" Jan 30 13:03:44.824832 containerd[1476]: time="2025-01-30T13:03:44.824805162Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:03:44.824947 containerd[1476]: time="2025-01-30T13:03:44.824931220Z" level=info msg="TearDown network for 
sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" successfully" Jan 30 13:03:44.824970 containerd[1476]: time="2025-01-30T13:03:44.824947457Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" returns successfully" Jan 30 13:03:44.824990 containerd[1476]: time="2025-01-30T13:03:44.824958175Z" level=info msg="TearDown network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\" successfully" Jan 30 13:03:44.825010 containerd[1476]: time="2025-01-30T13:03:44.824992369Z" level=info msg="StopPodSandbox for \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\" returns successfully" Jan 30 13:03:44.825279 containerd[1476]: time="2025-01-30T13:03:44.825254043Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:03:44.825373 containerd[1476]: time="2025-01-30T13:03:44.825348987Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:03:44.825405 containerd[1476]: time="2025-01-30T13:03:44.825371303Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:03:44.826083 containerd[1476]: time="2025-01-30T13:03:44.826009912Z" level=info msg="StopPodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\"" Jan 30 13:03:44.826182 containerd[1476]: time="2025-01-30T13:03:44.826103576Z" level=info msg="TearDown network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" successfully" Jan 30 13:03:44.826182 containerd[1476]: time="2025-01-30T13:03:44.826115654Z" level=info msg="StopPodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" returns successfully" Jan 30 13:03:44.826468 containerd[1476]: time="2025-01-30T13:03:44.826278825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:5,}" Jan 30 13:03:44.826690 containerd[1476]: time="2025-01-30T13:03:44.826639802Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" Jan 30 13:03:44.826752 containerd[1476]: time="2025-01-30T13:03:44.826740145Z" level=info msg="TearDown network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" successfully" Jan 30 13:03:44.826752 containerd[1476]: time="2025-01-30T13:03:44.826752463Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" returns successfully" Jan 30 13:03:44.828139 containerd[1476]: time="2025-01-30T13:03:44.827813078Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:03:44.828139 containerd[1476]: time="2025-01-30T13:03:44.827990967Z" level=info msg="TearDown network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" successfully" Jan 30 13:03:44.828139 containerd[1476]: time="2025-01-30T13:03:44.828006005Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" returns successfully" Jan 30 13:03:44.828872 kubelet[2559]: I0130 13:03:44.828467 2559 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640" Jan 30 13:03:44.828975 containerd[1476]: time="2025-01-30T13:03:44.828752315Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:03:44.829009 kubelet[2559]: I0130 13:03:44.828951 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:44.829032 containerd[1476]: time="2025-01-30T13:03:44.828998112Z" level=info msg="StopPodSandbox for \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\"" Jan 30 13:03:44.829271 containerd[1476]: time="2025-01-30T13:03:44.829241470Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:03:44.829271 containerd[1476]: time="2025-01-30T13:03:44.829264186Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:03:44.829370 containerd[1476]: time="2025-01-30T13:03:44.829281703Z" level=info msg="Ensure that sandbox 3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640 in task-service has been cleanup successfully" Jan 30 13:03:44.829827 kubelet[2559]: E0130 13:03:44.829498 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.829827 kubelet[2559]: E0130 13:03:44.829581 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.829991 containerd[1476]: time="2025-01-30T13:03:44.829880039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:5,}" Jan 30 13:03:44.830459 containerd[1476]: time="2025-01-30T13:03:44.830422824Z" level=info msg="TearDown network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\" successfully" Jan 30 13:03:44.830459 containerd[1476]: time="2025-01-30T13:03:44.830455618Z" level=info msg="StopPodSandbox for \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\" returns successfully" Jan 30 13:03:44.831056 containerd[1476]: time="2025-01-30T13:03:44.830871186Z" level=info msg="StopPodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\"" Jan 30 13:03:44.831056 containerd[1476]: time="2025-01-30T13:03:44.830972289Z" level=info msg="TearDown network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" successfully" Jan 30 13:03:44.831056 containerd[1476]: time="2025-01-30T13:03:44.830982327Z" level=info msg="StopPodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" returns successfully" Jan 30 13:03:44.831318 containerd[1476]: time="2025-01-30T13:03:44.831287234Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" Jan 30 13:03:44.831410 containerd[1476]: time="2025-01-30T13:03:44.831388776Z" level=info msg="TearDown network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" successfully" Jan 30 13:03:44.831410 containerd[1476]: time="2025-01-30T13:03:44.831405053Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" returns successfully" Jan 30 
13:03:44.831726 containerd[1476]: time="2025-01-30T13:03:44.831698202Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:03:44.831977 containerd[1476]: time="2025-01-30T13:03:44.831956637Z" level=info msg="TearDown network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" successfully" Jan 30 13:03:44.832077 containerd[1476]: time="2025-01-30T13:03:44.832061219Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" returns successfully" Jan 30 13:03:44.832563 containerd[1476]: time="2025-01-30T13:03:44.832531057Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:03:44.832866 containerd[1476]: time="2025-01-30T13:03:44.832776175Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:03:44.832866 containerd[1476]: time="2025-01-30T13:03:44.832795931Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:03:44.833050 kubelet[2559]: E0130 13:03:44.833032 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.833390 containerd[1476]: time="2025-01-30T13:03:44.833347875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:5,}" Jan 30 13:03:45.042668 systemd[1]: run-netns-cni\x2d1716b8c5\x2d448c\x2d4c89\x2d2ef4\x2d83b2472edea5.mount: Deactivated successfully. Jan 30 13:03:45.042777 systemd[1]: run-netns-cni\x2d49a7d164\x2db9a4\x2d6ed9\x2d5baf\x2d864bf0da8dd1.mount: Deactivated successfully. Jan 30 13:03:45.413229 systemd[1]: run-containerd-runc-k8s.io-053d0b57c73c951481477e9df6ad9ab492d0724d877cc27ce53921f349c22b6c-runc.4J8nEU.mount: Deactivated successfully. 
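[Annotation] The block above is the kubelet's sandbox retry loop: every stale sandbox for a pod is stopped (StopPodSandbox), its network torn down, and its netns cleaned up, after which RunPodSandbox is reissued with the Attempt counter bumped (Attempt:4, Attempt:5). Below is a sketch of driving the stop-and-remove half of that cycle over the CRI API; the request/response types are from k8s.io/cri-api, while the socket path and the not-ready filter are assumptions about this particular setup.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint, as used on this host; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// List sandboxes that are no longer ready, then stop and remove each,
	// mirroring the StopPodSandbox/TearDown sequence in the log.
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			State: &runtimeapi.PodSandboxStateValue{
				State: runtimeapi.PodSandboxState_SANDBOX_NOTREADY,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			log.Printf("stop %s: %v", sb.Id, err)
			continue
		}
		if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			log.Printf("remove %s: %v", sb.Id, err)
		}
	}
	// The kubelet then calls RunPodSandbox again with Attempt+1; that step is
	// omitted here because it requires the full pod metadata.
}

StopPodSandbox is defined to be idempotent, which is why the log shows clean "returns successfully" lines even for sandboxes whose containers were already gone ("Container not found in pod's containers").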
Jan 30 13:03:45.626019 systemd-networkd[1402]: cali124433f8d19: Link UP Jan 30 13:03:45.627161 systemd-networkd[1402]: cali124433f8d19: Gained carrier Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:44.953 [INFO][4453] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.053 [INFO][4453] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--89crv-eth0 coredns-668d6bf9bc- kube-system 51a31eb8-8977-41a3-b690-162f3ef160ae 691 0 2025-01-30 13:03:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-89crv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali124433f8d19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.053 [INFO][4453] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.520 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" HandleID="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Workload="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.542 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" HandleID="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Workload="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400057ab30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-89crv", "timestamp":"2025-01-30 13:03:45.520813703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.543 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.543 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.543 [INFO][4489] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.550 [INFO][4489] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.569 [INFO][4489] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.575 [INFO][4489] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.580 [INFO][4489] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.583 [INFO][4489] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.583 [INFO][4489] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.586 [INFO][4489] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232 Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.591 [INFO][4489] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.598 [INFO][4489] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.598 [INFO][4489] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" host="localhost" Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.599 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:03:45.657727 containerd[1476]: 2025-01-30 13:03:45.599 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" HandleID="k8s-pod-network.0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Workload="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.658806 containerd[1476]: 2025-01-30 13:03:45.604 [INFO][4453] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--89crv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"51a31eb8-8977-41a3-b690-162f3ef160ae", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-89crv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali124433f8d19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.658806 containerd[1476]: 2025-01-30 13:03:45.604 [INFO][4453] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.658806 containerd[1476]: 2025-01-30 13:03:45.604 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali124433f8d19 ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.658806 containerd[1476]: 2025-01-30 13:03:45.626 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.658806 containerd[1476]: 2025-01-30 13:03:45.628 
[INFO][4453] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--89crv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"51a31eb8-8977-41a3-b690-162f3ef160ae", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232", Pod:"coredns-668d6bf9bc-89crv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali124433f8d19", MAC:"e2:14:aa:5a:df:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.658806 containerd[1476]: 2025-01-30 13:03:45.655 [INFO][4453] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232" Namespace="kube-system" Pod="coredns-668d6bf9bc-89crv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89crv-eth0" Jan 30 13:03:45.701192 containerd[1476]: time="2025-01-30T13:03:45.700312982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:45.701192 containerd[1476]: time="2025-01-30T13:03:45.700388649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:45.701192 containerd[1476]: time="2025-01-30T13:03:45.700412406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:45.701516 containerd[1476]: time="2025-01-30T13:03:45.701142806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:45.721817 systemd[1]: Started cri-containerd-0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232.scope - libcontainer container 0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232. 
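[Annotation] The ipam/ipam.go lines above trace one complete Calico block-affinity allocation: acquire the host-wide IPAM lock, confirm this node's affinity for 192.168.88.128/26, load the block, claim the first free address (192.168.88.129 for coredns-668d6bf9bc-89crv), write the block back, and release the lock. Below is a toy model of that walk; the block type and occupancy bitmap are invented for illustration and are not Calico's actual data structures.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a /26 allocation block with per-address occupancy, loosely
// following the steps logged by Calico's ipam.go.
type block struct {
	cidr netip.Prefix // e.g. 192.168.88.128/26
	used [64]bool     // one slot per address in the /26
}

var hostLock sync.Mutex // stand-in for the "host-wide IPAM lock"

// assignOne claims the first free address in the block, as in the logged
// "Attempting to assign 1 addresses from block".
func assignOne(b *block, handle string) (netip.Addr, error) {
	hostLock.Lock() // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	addr := b.cidr.Addr()
	for i := 0; i < 64; i++ {
		if !b.used[i] {
			b.used[i] = true // "Writing block in order to claim IPs"
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted for handle %s", b.cidr, handle)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
	// Assume .128 is already taken (commonly the node's tunnel address),
	// so the first pod receives .129 as in the log.
	b.used[0] = true
	ip, err := assignOne(b, "k8s-pod-network.0e2f3ab0...")
	fmt.Println(ip, err) // 192.168.88.129 <nil>
}

Serializing assignments behind the lock is also why the four pending pods on this node receive their addresses strictly one after another (.129 through .132) in the surrounding log.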
Jan 30 13:03:45.724018 systemd-networkd[1402]: cali5480cc26309: Link UP Jan 30 13:03:45.724995 systemd-networkd[1402]: cali5480cc26309: Gained carrier Jan 30 13:03:45.739464 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:44.865 [INFO][4402] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.049 [INFO][4402] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0 calico-apiserver-7874549f5f- calico-apiserver e0b441b3-88a9-4555-8974-b147721621a3 694 0 2025-01-30 13:03:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7874549f5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7874549f5f-gzb44 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5480cc26309 [] []}} ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.050 [INFO][4402] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.518 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" HandleID="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Workload="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.544 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" HandleID="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Workload="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005e56f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7874549f5f-gzb44", "timestamp":"2025-01-30 13:03:45.51813302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.545 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.599 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.599 [INFO][4493] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.649 [INFO][4493] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.662 [INFO][4493] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.675 [INFO][4493] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.678 [INFO][4493] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.687 [INFO][4493] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.687 [INFO][4493] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.690 [INFO][4493] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.706 [INFO][4493] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.713 [INFO][4493] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.713 [INFO][4493] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" host="localhost" Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.713 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:03:45.744243 containerd[1476]: 2025-01-30 13:03:45.713 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" HandleID="k8s-pod-network.18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Workload="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.744883 containerd[1476]: 2025-01-30 13:03:45.719 [INFO][4402] cni-plugin/k8s.go 386: Populated endpoint ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0", GenerateName:"calico-apiserver-7874549f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0b441b3-88a9-4555-8974-b147721621a3", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7874549f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7874549f5f-gzb44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5480cc26309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.744883 containerd[1476]: 2025-01-30 13:03:45.720 [INFO][4402] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.744883 containerd[1476]: 2025-01-30 13:03:45.720 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5480cc26309 ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.744883 containerd[1476]: 2025-01-30 13:03:45.724 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.744883 containerd[1476]: 2025-01-30 13:03:45.725 [INFO][4402] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0", GenerateName:"calico-apiserver-7874549f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0b441b3-88a9-4555-8974-b147721621a3", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7874549f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f", Pod:"calico-apiserver-7874549f5f-gzb44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5480cc26309", MAC:"06:37:46:66:f4:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.744883 containerd[1476]: 2025-01-30 13:03:45.741 [INFO][4402] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-gzb44" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--gzb44-eth0" Jan 30 13:03:45.769259 containerd[1476]: time="2025-01-30T13:03:45.769218382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89crv,Uid:51a31eb8-8977-41a3-b690-162f3ef160ae,Namespace:kube-system,Attempt:5,} returns sandbox id \"0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232\"" Jan 30 13:03:45.770188 kubelet[2559]: E0130 13:03:45.770163 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:45.772776 containerd[1476]: time="2025-01-30T13:03:45.772707812Z" level=info msg="CreateContainer within sandbox \"0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:03:45.779895 containerd[1476]: time="2025-01-30T13:03:45.779559495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:45.779895 containerd[1476]: time="2025-01-30T13:03:45.779671436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:45.779895 containerd[1476]: time="2025-01-30T13:03:45.779684634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:45.779895 containerd[1476]: time="2025-01-30T13:03:45.779788937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:45.802824 systemd[1]: Started cri-containerd-18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f.scope - libcontainer container 18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f. Jan 30 13:03:45.815844 systemd-networkd[1402]: cali4f7ee1e3de4: Link UP Jan 30 13:03:45.816072 systemd-networkd[1402]: cali4f7ee1e3de4: Gained carrier Jan 30 13:03:45.825214 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:44.942 [INFO][4434] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.049 [INFO][4434] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kw84f-eth0 csi-node-driver- calico-system 21d84f57-66ce-4eaa-a49a-963d6f74f4a0 606 0 2025-01-30 13:03:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kw84f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4f7ee1e3de4 [] []}} ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.049 [INFO][4434] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.518 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" HandleID="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Workload="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.547 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" HandleID="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Workload="localhost-k8s-csi--node--driver--kw84f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kw84f", "timestamp":"2025-01-30 13:03:45.518782234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.547 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.714 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.714 [INFO][4494] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.748 [INFO][4494] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.766 [INFO][4494] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.778 [INFO][4494] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.783 [INFO][4494] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.788 [INFO][4494] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.789 [INFO][4494] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.792 [INFO][4494] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81 Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.799 [INFO][4494] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.808 [INFO][4494] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.808 [INFO][4494] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" host="localhost" Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.808 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:03:45.829413 containerd[1476]: 2025-01-30 13:03:45.808 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" HandleID="k8s-pod-network.6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Workload="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.830751 containerd[1476]: 2025-01-30 13:03:45.811 [INFO][4434] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kw84f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21d84f57-66ce-4eaa-a49a-963d6f74f4a0", ResourceVersion:"606", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kw84f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f7ee1e3de4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.830751 containerd[1476]: 2025-01-30 13:03:45.812 [INFO][4434] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.830751 containerd[1476]: 2025-01-30 13:03:45.812 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f7ee1e3de4 ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.830751 containerd[1476]: 2025-01-30 13:03:45.814 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.830751 containerd[1476]: 2025-01-30 13:03:45.814 [INFO][4434] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kw84f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21d84f57-66ce-4eaa-a49a-963d6f74f4a0", ResourceVersion:"606", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81", Pod:"csi-node-driver-kw84f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f7ee1e3de4", MAC:"da:fb:c5:f1:ac:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.830751 containerd[1476]: 2025-01-30 13:03:45.825 [INFO][4434] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81" Namespace="calico-system" Pod="csi-node-driver-kw84f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kw84f-eth0" Jan 30 13:03:45.830751 containerd[1476]: time="2025-01-30T13:03:45.829903002Z" level=info msg="CreateContainer within sandbox \"0e2f3ab018ae8e8465e241e65e75a0c55c35378060cd4f5cb462208457868232\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00fe7bc613d71443d74dbc657d321fe6ceeae95df0b69796ad728a0d645026e9\"" Jan 30 13:03:45.832019 containerd[1476]: time="2025-01-30T13:03:45.831933031Z" level=info msg="StartContainer for \"00fe7bc613d71443d74dbc657d321fe6ceeae95df0b69796ad728a0d645026e9\"" Jan 30 13:03:45.841036 kubelet[2559]: E0130 13:03:45.840792 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:45.870963 containerd[1476]: time="2025-01-30T13:03:45.870067491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:45.870963 containerd[1476]: time="2025-01-30T13:03:45.870142318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:45.870963 containerd[1476]: time="2025-01-30T13:03:45.870157436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:45.870963 containerd[1476]: time="2025-01-30T13:03:45.870261899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:45.873379 containerd[1476]: time="2025-01-30T13:03:45.873259850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-gzb44,Uid:e0b441b3-88a9-4555-8974-b147721621a3,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f\"" Jan 30 13:03:45.876194 containerd[1476]: time="2025-01-30T13:03:45.876143659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:03:45.896232 systemd[1]: Started cri-containerd-00fe7bc613d71443d74dbc657d321fe6ceeae95df0b69796ad728a0d645026e9.scope - libcontainer container 00fe7bc613d71443d74dbc657d321fe6ceeae95df0b69796ad728a0d645026e9. Jan 30 13:03:45.902860 systemd[1]: Started cri-containerd-6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81.scope - libcontainer container 6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81. Jan 30 13:03:45.922349 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:03:45.947055 systemd-networkd[1402]: cali65706d0a293: Link UP Jan 30 13:03:45.947886 systemd-networkd[1402]: cali65706d0a293: Gained carrier Jan 30 13:03:45.962420 containerd[1476]: time="2025-01-30T13:03:45.962306644Z" level=info msg="StartContainer for \"00fe7bc613d71443d74dbc657d321fe6ceeae95df0b69796ad728a0d645026e9\" returns successfully" Jan 30 13:03:45.963144 containerd[1476]: time="2025-01-30T13:03:45.962422745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kw84f,Uid:21d84f57-66ce-4eaa-a49a-963d6f74f4a0,Namespace:calico-system,Attempt:5,} returns sandbox id \"6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81\"" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:44.905 [INFO][4412] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.056 [INFO][4412] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0 calico-apiserver-7874549f5f- calico-apiserver 7acfabdf-bbd3-498c-9434-a86e65427513 693 0 2025-01-30 13:03:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7874549f5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7874549f5f-h26th eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65706d0a293 [] []}} ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.056 [INFO][4412] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.509 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" 
HandleID="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Workload="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.549 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" HandleID="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Workload="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7874549f5f-h26th", "timestamp":"2025-01-30 13:03:45.509912601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.549 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.808 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.809 [INFO][4492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.853 [INFO][4492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.863 [INFO][4492] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.882 [INFO][4492] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.887 [INFO][4492] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.891 [INFO][4492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.892 [INFO][4492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.895 [INFO][4492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6 Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.907 [INFO][4492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.935 [INFO][4492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.935 [INFO][4492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" host="localhost" Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.935 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:03:45.991734 containerd[1476]: 2025-01-30 13:03:45.935 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" HandleID="k8s-pod-network.e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Workload="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:45.992518 containerd[1476]: 2025-01-30 13:03:45.938 [INFO][4412] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0", GenerateName:"calico-apiserver-7874549f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7acfabdf-bbd3-498c-9434-a86e65427513", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7874549f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7874549f5f-h26th", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65706d0a293", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.992518 containerd[1476]: 2025-01-30 13:03:45.938 [INFO][4412] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:45.992518 containerd[1476]: 2025-01-30 13:03:45.939 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65706d0a293 ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:45.992518 containerd[1476]: 2025-01-30 13:03:45.947 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:45.992518 containerd[1476]: 2025-01-30 13:03:45.947 [INFO][4412] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0", GenerateName:"calico-apiserver-7874549f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7acfabdf-bbd3-498c-9434-a86e65427513", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7874549f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6", Pod:"calico-apiserver-7874549f5f-h26th", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65706d0a293", MAC:"32:82:46:e9:ed:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:45.992518 containerd[1476]: 2025-01-30 13:03:45.989 [INFO][4412] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7874549f5f-h26th" WorkloadEndpoint="localhost-k8s-calico--apiserver--7874549f5f--h26th-eth0" Jan 30 13:03:46.018223 containerd[1476]: time="2025-01-30T13:03:46.018072435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:46.018823 containerd[1476]: time="2025-01-30T13:03:46.018779047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:46.021182 containerd[1476]: time="2025-01-30T13:03:46.020504863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:46.021182 containerd[1476]: time="2025-01-30T13:03:46.020683076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:46.055836 systemd[1]: Started cri-containerd-e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6.scope - libcontainer container e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6. 
Jan 30 13:03:46.062125 systemd-networkd[1402]: cali19c37a5da0e: Link UP Jan 30 13:03:46.062722 systemd-networkd[1402]: cali19c37a5da0e: Gained carrier Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:44.950 [INFO][4464] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.051 [INFO][4464] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--w7k54-eth0 coredns-668d6bf9bc- kube-system c16bbc77-2b47-4ff4-846f-0b437cb6c4ee 692 0 2025-01-30 13:03:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-w7k54 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19c37a5da0e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.051 [INFO][4464] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.515 [INFO][4505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" HandleID="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Workload="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.549 [INFO][4505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" HandleID="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Workload="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e6880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-w7k54", "timestamp":"2025-01-30 13:03:45.515823997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.551 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.935 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.935 [INFO][4505] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.984 [INFO][4505] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:45.998 [INFO][4505] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.011 [INFO][4505] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.016 [INFO][4505] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.025 [INFO][4505] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.025 [INFO][4505] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.027 [INFO][4505] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.044 [INFO][4505] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.053 [INFO][4505] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.055 [INFO][4505] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" host="localhost" Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.055 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
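The v3.WorkloadEndpointPort dumps around this point print container ports as Go hex (Port:0x35, Port:0x23c1). Decoded, these are the standard CoreDNS listeners; a quick check:

package main

import "fmt"

func main() {
	// Hex values exactly as printed in the WorkloadEndpoint dumps.
	ports := []struct {
		name string
		port uint16
	}{
		{"dns", 0x35},       // UDP
		{"dns-tcp", 0x35},   // TCP
		{"metrics", 0x23c1}, // TCP
	}
	for _, p := range ports {
		fmt.Printf("%-7s -> %d\n", p.name, p.port) // 53, 53, 9153
	}
}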
Jan 30 13:03:46.078698 containerd[1476]: 2025-01-30 13:03:46.055 [INFO][4505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" HandleID="k8s-pod-network.d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Workload="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.079288 containerd[1476]: 2025-01-30 13:03:46.059 [INFO][4464] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--w7k54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c16bbc77-2b47-4ff4-846f-0b437cb6c4ee", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-w7k54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19c37a5da0e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:46.079288 containerd[1476]: 2025-01-30 13:03:46.060 [INFO][4464] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.079288 containerd[1476]: 2025-01-30 13:03:46.060 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19c37a5da0e ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.079288 containerd[1476]: 2025-01-30 13:03:46.061 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.079288 containerd[1476]: 2025-01-30 13:03:46.062 
[INFO][4464] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--w7k54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c16bbc77-2b47-4ff4-846f-0b437cb6c4ee", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f", Pod:"coredns-668d6bf9bc-w7k54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19c37a5da0e", MAC:"42:aa:98:32:f3:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:46.079288 containerd[1476]: 2025-01-30 13:03:46.072 [INFO][4464] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w7k54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w7k54-eth0" Jan 30 13:03:46.093179 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:03:46.110404 containerd[1476]: time="2025-01-30T13:03:46.110232782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:46.110404 containerd[1476]: time="2025-01-30T13:03:46.110324728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:46.110404 containerd[1476]: time="2025-01-30T13:03:46.110336646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:46.110636 containerd[1476]: time="2025-01-30T13:03:46.110501740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:46.143703 containerd[1476]: time="2025-01-30T13:03:46.143574963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7874549f5f-h26th,Uid:7acfabdf-bbd3-498c-9434-a86e65427513,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6\"" Jan 30 13:03:46.147103 systemd[1]: Started cri-containerd-d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f.scope - libcontainer container d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f. Jan 30 13:03:46.153846 systemd-networkd[1402]: cali934d000ceac: Link UP Jan 30 13:03:46.154354 systemd-networkd[1402]: cali934d000ceac: Gained carrier Jan 30 13:03:46.171093 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:44.923 [INFO][4426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:45.053 [INFO][4426] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0 calico-kube-controllers-5f499d887f- calico-system 52142e13-5b13-4ee1-bef6-e84504589fe4 695 0 2025-01-30 13:03:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f499d887f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f499d887f-64bsx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali934d000ceac [] []}} ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:45.053 [INFO][4426] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:45.506 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" HandleID="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Workload="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:45.554 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" HandleID="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Workload="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b4500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f499d887f-64bsx", "timestamp":"2025-01-30 13:03:45.506834343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:45.554 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.055 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.055 [INFO][4490] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.084 [INFO][4490] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.098 [INFO][4490] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.107 [INFO][4490] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.111 [INFO][4490] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.115 [INFO][4490] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.115 [INFO][4490] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.118 [INFO][4490] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7 Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.125 [INFO][4490] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.138 [INFO][4490] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.138 [INFO][4490] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" host="localhost" Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.138 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
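Note how the three concurrent CNI ADDs serialize on the lock: the kube-controllers request asked for it at 13:03:45.554 but only acquired it at 13:03:46.055, after the coredns assignment released it. A self-contained sketch of that serialization; unlike the log, which goroutine wins the lock here is up to the Go scheduler, so the pod-to-address pairing may differ between runs:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = 132 // .128-.131 already claimed
		wg   sync.WaitGroup
	)
	pods := []string{
		"calico-apiserver-7874549f5f-h26th",
		"coredns-668d6bf9bc-w7k54",
		"calico-kube-controllers-5f499d887f-64bsx",
	}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			mu.Lock() // "Acquired host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.88.%d/26", next)
			next++
			mu.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(pod, "->", ip)
		}(pod)
	}
	wg.Wait()
}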
Jan 30 13:03:46.172778 containerd[1476]: 2025-01-30 13:03:46.138 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" HandleID="k8s-pod-network.3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Workload="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.173352 containerd[1476]: 2025-01-30 13:03:46.145 [INFO][4426] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0", GenerateName:"calico-kube-controllers-5f499d887f-", Namespace:"calico-system", SelfLink:"", UID:"52142e13-5b13-4ee1-bef6-e84504589fe4", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f499d887f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f499d887f-64bsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali934d000ceac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:46.173352 containerd[1476]: 2025-01-30 13:03:46.146 [INFO][4426] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.173352 containerd[1476]: 2025-01-30 13:03:46.146 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali934d000ceac ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.173352 containerd[1476]: 2025-01-30 13:03:46.154 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.173352 containerd[1476]: 2025-01-30 13:03:46.157 [INFO][4426] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0", GenerateName:"calico-kube-controllers-5f499d887f-", Namespace:"calico-system", SelfLink:"", UID:"52142e13-5b13-4ee1-bef6-e84504589fe4", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f499d887f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7", Pod:"calico-kube-controllers-5f499d887f-64bsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali934d000ceac", MAC:"56:99:71:27:1c:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:03:46.173352 containerd[1476]: 2025-01-30 13:03:46.169 [INFO][4426] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7" Namespace="calico-system" Pod="calico-kube-controllers-5f499d887f-64bsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f499d887f--64bsx-eth0" Jan 30 13:03:46.206737 containerd[1476]: time="2025-01-30T13:03:46.205872276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7k54,Uid:c16bbc77-2b47-4ff4-846f-0b437cb6c4ee,Namespace:kube-system,Attempt:5,} returns sandbox id \"d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f\"" Jan 30 13:03:46.207458 kubelet[2559]: E0130 13:03:46.207277 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:46.207563 containerd[1476]: time="2025-01-30T13:03:46.206929194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:46.207563 containerd[1476]: time="2025-01-30T13:03:46.207002823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:46.207563 containerd[1476]: time="2025-01-30T13:03:46.207019860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:46.207563 containerd[1476]: time="2025-01-30T13:03:46.207114126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:46.210063 containerd[1476]: time="2025-01-30T13:03:46.210016842Z" level=info msg="CreateContainer within sandbox \"d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:03:46.245016 systemd[1]: Started cri-containerd-3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7.scope - libcontainer container 3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7. Jan 30 13:03:46.246685 containerd[1476]: time="2025-01-30T13:03:46.246183711Z" level=info msg="CreateContainer within sandbox \"d934b5a855c606cf80076d15e05574551ce3b19c8850384ddaede63f8f1bc97f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85e977cd388ffe283037306f00abb2ef3fe07fd3a3ca1a2266760ee9a5c51a01\"" Jan 30 13:03:46.250452 containerd[1476]: time="2025-01-30T13:03:46.250402066Z" level=info msg="StartContainer for \"85e977cd388ffe283037306f00abb2ef3fe07fd3a3ca1a2266760ee9a5c51a01\"" Jan 30 13:03:46.262057 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:03:46.287821 systemd[1]: Started cri-containerd-85e977cd388ffe283037306f00abb2ef3fe07fd3a3ca1a2266760ee9a5c51a01.scope - libcontainer container 85e977cd388ffe283037306f00abb2ef3fe07fd3a3ca1a2266760ee9a5c51a01. Jan 30 13:03:46.292353 containerd[1476]: time="2025-01-30T13:03:46.292288220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f499d887f-64bsx,Uid:52142e13-5b13-4ee1-bef6-e84504589fe4,Namespace:calico-system,Attempt:5,} returns sandbox id \"3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7\"" Jan 30 13:03:46.322004 containerd[1476]: time="2025-01-30T13:03:46.321944645Z" level=info msg="StartContainer for \"85e977cd388ffe283037306f00abb2ef3fe07fd3a3ca1a2266760ee9a5c51a01\" returns successfully" Jan 30 13:03:46.856155 kubelet[2559]: E0130 13:03:46.856125 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:46.870554 kubelet[2559]: I0130 13:03:46.870472 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-89crv" podStartSLOduration=23.870326902 podStartE2EDuration="23.870326902s" podCreationTimestamp="2025-01-30 13:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:46.869388845 +0000 UTC m=+29.416484768" watchObservedRunningTime="2025-01-30 13:03:46.870326902 +0000 UTC m=+29.417422865" Jan 30 13:03:46.889703 kubelet[2559]: E0130 13:03:46.889653 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:46.907496 kubelet[2559]: I0130 13:03:46.907377 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w7k54" podStartSLOduration=23.907358439 podStartE2EDuration="23.907358439s" podCreationTimestamp="2025-01-30 13:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:46.906996814 +0000 UTC m=+29.454092777" watchObservedRunningTime="2025-01-30 13:03:46.907358439 +0000 UTC 
m=+29.454454402" Jan 30 13:03:46.947058 systemd-networkd[1402]: cali124433f8d19: Gained IPv6LL Jan 30 13:03:47.073705 systemd-networkd[1402]: cali4f7ee1e3de4: Gained IPv6LL Jan 30 13:03:47.320178 containerd[1476]: time="2025-01-30T13:03:47.320115122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:47.320765 containerd[1476]: time="2025-01-30T13:03:47.320717955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 13:03:47.321684 containerd[1476]: time="2025-01-30T13:03:47.321652381Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:47.324333 containerd[1476]: time="2025-01-30T13:03:47.324264367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:47.325118 containerd[1476]: time="2025-01-30T13:03:47.325028617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.448291735s" Jan 30 13:03:47.325118 containerd[1476]: time="2025-01-30T13:03:47.325063292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 13:03:47.326321 containerd[1476]: time="2025-01-30T13:03:47.326295116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:03:47.327686 containerd[1476]: time="2025-01-30T13:03:47.327623165Z" level=info msg="CreateContainer within sandbox \"18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:03:47.349802 containerd[1476]: time="2025-01-30T13:03:47.349745034Z" level=info msg="CreateContainer within sandbox \"18317ee5670387304fea9240c17eb003f9d1d8a0eac03d47442cd44adec02c5f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1bb08b2dc880e2d5a0a82ccc2925074dcd2305ef9ccf9d18479325a4b5af7167\"" Jan 30 13:03:47.350366 containerd[1476]: time="2025-01-30T13:03:47.350313552Z" level=info msg="StartContainer for \"1bb08b2dc880e2d5a0a82ccc2925074dcd2305ef9ccf9d18479325a4b5af7167\"" Jan 30 13:03:47.394042 systemd-networkd[1402]: cali65706d0a293: Gained IPv6LL Jan 30 13:03:47.396834 systemd[1]: Started cri-containerd-1bb08b2dc880e2d5a0a82ccc2925074dcd2305ef9ccf9d18479325a4b5af7167.scope - libcontainer container 1bb08b2dc880e2d5a0a82ccc2925074dcd2305ef9ccf9d18479325a4b5af7167. 
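The "in 1.448291735s" figure above is containerd's own measurement of the apiserver image pull; subtracting the journal timestamps that bracket the pull (PullImage at 45.876..., Pulled at 47.325...) agrees to within a millisecond:

package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:03:45.876143659Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:03:47.325028617Z")
	fmt.Println(done.Sub(start)) // 1.448884958s, vs. containerd's 1.448291735s
}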
Jan 30 13:03:47.447305 containerd[1476]: time="2025-01-30T13:03:47.447137351Z" level=info msg="StartContainer for \"1bb08b2dc880e2d5a0a82ccc2925074dcd2305ef9ccf9d18479325a4b5af7167\" returns successfully" Jan 30 13:03:47.458733 systemd-networkd[1402]: cali19c37a5da0e: Gained IPv6LL Jan 30 13:03:47.649763 systemd-networkd[1402]: cali5480cc26309: Gained IPv6LL Jan 30 13:03:47.871502 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:58636.service - OpenSSH per-connection server daemon (10.0.0.1:58636). Jan 30 13:03:47.907780 kubelet[2559]: E0130 13:03:47.907669 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:47.918014 kubelet[2559]: E0130 13:03:47.917957 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:47.954834 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:47.956714 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:47.961648 systemd-logind[1455]: New session 8 of user core. Jan 30 13:03:47.971796 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:03:48.163045 systemd-networkd[1402]: cali934d000ceac: Gained IPv6LL Jan 30 13:03:48.252093 sshd[5209]: Connection closed by 10.0.0.1 port 58636 Jan 30 13:03:48.252674 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:48.258375 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:58636.service: Deactivated successfully. Jan 30 13:03:48.261228 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:03:48.262201 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:03:48.266224 systemd-logind[1455]: Removed session 8. 
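The SSH session records above follow OpenSSH's fixed "Accepted publickey for USER from ADDR port PORT ssh2" layout, so they are easy to pull apart mechanically; a small illustrative extractor:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	line := `sshd[5205]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s addr=%s port=%s\n", m[1], m[2], m[3]) // user=core addr=10.0.0.1 port=58636
	}
}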
Jan 30 13:03:48.519492 containerd[1476]: time="2025-01-30T13:03:48.518695886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:48.519492 containerd[1476]: time="2025-01-30T13:03:48.519441426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 13:03:48.520202 containerd[1476]: time="2025-01-30T13:03:48.520162209Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:48.522392 containerd[1476]: time="2025-01-30T13:03:48.522350035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:48.523159 containerd[1476]: time="2025-01-30T13:03:48.523128490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.1963786s" Jan 30 13:03:48.523209 containerd[1476]: time="2025-01-30T13:03:48.523161486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 13:03:48.525088 containerd[1476]: time="2025-01-30T13:03:48.524611371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:03:48.528263 containerd[1476]: time="2025-01-30T13:03:48.528221806Z" level=info msg="CreateContainer within sandbox \"6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:03:48.567443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622881755.mount: Deactivated successfully. Jan 30 13:03:48.569871 containerd[1476]: time="2025-01-30T13:03:48.569816495Z" level=info msg="CreateContainer within sandbox \"6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ce6811a6e75901db412f3febc70376240090da789c1193aab645a24297b25f4\"" Jan 30 13:03:48.570899 containerd[1476]: time="2025-01-30T13:03:48.570809402Z" level=info msg="StartContainer for \"5ce6811a6e75901db412f3febc70376240090da789c1193aab645a24297b25f4\"" Jan 30 13:03:48.603550 systemd[1]: run-containerd-runc-k8s.io-5ce6811a6e75901db412f3febc70376240090da789c1193aab645a24297b25f4-runc.M03Vhz.mount: Deactivated successfully. Jan 30 13:03:48.618811 systemd[1]: Started cri-containerd-5ce6811a6e75901db412f3febc70376240090da789c1193aab645a24297b25f4.scope - libcontainer container 5ce6811a6e75901db412f3febc70376240090da789c1193aab645a24297b25f4. 
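Each pod in this log goes through the same CRI sequence: RunPodSandbox returns a sandbox id (the long hex names in the cri-containerd scopes), CreateContainer within that sandbox returns a container id, and StartContainer runs it. A hedged Go sketch of those three calls against containerd's default socket using the published k8s.io/cri-api types; metadata values are copied from the csi-node-driver lines above, and most config fields are omitted:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1. RunPodSandbox -> sandbox id
	sbConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "csi-node-driver-kw84f",
			Namespace: "calico-system",
			Uid:       "21d84f57-66ce-4eaa-a49a-963d6f74f4a0",
			Attempt:   5,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within that sandbox -> container id
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-csi"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/csi:v3.29.1"},
		},
		SandboxConfig: sbConfig,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer -> the "StartContainer ... returns successfully" lines
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}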
Jan 30 13:03:48.672914 containerd[1476]: time="2025-01-30T13:03:48.672860525Z" level=info msg="StartContainer for \"5ce6811a6e75901db412f3febc70376240090da789c1193aab645a24297b25f4\" returns successfully" Jan 30 13:03:48.769009 containerd[1476]: time="2025-01-30T13:03:48.767827441Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:48.769009 containerd[1476]: time="2025-01-30T13:03:48.768341172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:03:48.770807 containerd[1476]: time="2025-01-30T13:03:48.770701935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 246.054328ms" Jan 30 13:03:48.771325 containerd[1476]: time="2025-01-30T13:03:48.771301934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 13:03:48.772722 containerd[1476]: time="2025-01-30T13:03:48.772691907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:03:48.774741 containerd[1476]: time="2025-01-30T13:03:48.774671521Z" level=info msg="CreateContainer within sandbox \"e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:03:48.786377 containerd[1476]: time="2025-01-30T13:03:48.786101865Z" level=info msg="CreateContainer within sandbox \"e5aae9b65e6c8a28413214c2f78738b712ec0f16ec69acdc2a91a140a8ce5ef6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"acf86421ebed9f219aaa624d9bd109220ffaaf562827b4ef1746ee32c099fccb\"" Jan 30 13:03:48.787773 containerd[1476]: time="2025-01-30T13:03:48.787514755Z" level=info msg="StartContainer for \"acf86421ebed9f219aaa624d9bd109220ffaaf562827b4ef1746ee32c099fccb\"" Jan 30 13:03:48.847826 systemd[1]: Started cri-containerd-acf86421ebed9f219aaa624d9bd109220ffaaf562827b4ef1746ee32c099fccb.scope - libcontainer container acf86421ebed9f219aaa624d9bd109220ffaaf562827b4ef1746ee32c099fccb. 
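The pod_startup_latency_tracker entries that follow report two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Recomputing both from the apiserver pod's own timestamps reproduces the logged values exactly:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the calico-apiserver-7874549f5f-h26th entry.
	created, _ := time.Parse(time.RFC3339, "2025-01-30T13:03:29Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:03:48.958552966Z")
	pullStart, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:03:46.145557179Z")
	pullDone, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:03:48.772266605Z")

	e2e := running.Sub(created)
	slo := e2e - pullDone.Sub(pullStart)
	fmt.Println(e2e, slo) // 19.958552966s 17.33184354s, matching the log
}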
Jan 30 13:03:48.929007 containerd[1476]: time="2025-01-30T13:03:48.928168370Z" level=info msg="StartContainer for \"acf86421ebed9f219aaa624d9bd109220ffaaf562827b4ef1746ee32c099fccb\" returns successfully" Jan 30 13:03:48.941257 kubelet[2559]: I0130 13:03:48.941212 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:48.943050 kubelet[2559]: E0130 13:03:48.942490 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:48.943050 kubelet[2559]: E0130 13:03:48.942978 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:48.958644 kubelet[2559]: I0130 13:03:48.958573 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7874549f5f-h26th" podStartSLOduration=17.33184354 podStartE2EDuration="19.958552966s" podCreationTimestamp="2025-01-30 13:03:29 +0000 UTC" firstStartedPulling="2025-01-30 13:03:46.145557179 +0000 UTC m=+28.692653142" lastFinishedPulling="2025-01-30 13:03:48.772266605 +0000 UTC m=+31.319362568" observedRunningTime="2025-01-30 13:03:48.957219186 +0000 UTC m=+31.504315149" watchObservedRunningTime="2025-01-30 13:03:48.958552966 +0000 UTC m=+31.505649089" Jan 30 13:03:48.958928 kubelet[2559]: I0130 13:03:48.958901 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7874549f5f-gzb44" podStartSLOduration=18.508629173 podStartE2EDuration="19.95889556s" podCreationTimestamp="2025-01-30 13:03:29 +0000 UTC" firstStartedPulling="2025-01-30 13:03:45.875484007 +0000 UTC m=+28.422579970" lastFinishedPulling="2025-01-30 13:03:47.325750394 +0000 UTC m=+29.872846357" observedRunningTime="2025-01-30 13:03:47.92060659 +0000 UTC m=+30.467702553" watchObservedRunningTime="2025-01-30 13:03:48.95889556 +0000 UTC m=+31.505991523" Jan 30 13:03:50.289330 containerd[1476]: time="2025-01-30T13:03:50.289269584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:50.291081 containerd[1476]: time="2025-01-30T13:03:50.290996490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 13:03:50.292027 containerd[1476]: time="2025-01-30T13:03:50.291992846Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:50.294134 containerd[1476]: time="2025-01-30T13:03:50.294077068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:50.295391 containerd[1476]: time="2025-01-30T13:03:50.295008832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.522087595s" Jan 30 13:03:50.295391 
containerd[1476]: time="2025-01-30T13:03:50.295044228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 13:03:50.296215 containerd[1476]: time="2025-01-30T13:03:50.296194125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:03:50.307093 containerd[1476]: time="2025-01-30T13:03:50.307041660Z" level=info msg="CreateContainer within sandbox \"3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:03:50.321325 containerd[1476]: time="2025-01-30T13:03:50.321273775Z" level=info msg="CreateContainer within sandbox \"3ff60305c2aa988deb41e0a8f6b97d187b1d7eaa166fd500ee2cadb7ce1e58b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f5e10ea63a879266f8130520d14d692e1f8e0eddb7fce0cfc8f04c0ec12ec248\"" Jan 30 13:03:50.321848 containerd[1476]: time="2025-01-30T13:03:50.321820867Z" level=info msg="StartContainer for \"f5e10ea63a879266f8130520d14d692e1f8e0eddb7fce0cfc8f04c0ec12ec248\"" Jan 30 13:03:50.349803 systemd[1]: Started cri-containerd-f5e10ea63a879266f8130520d14d692e1f8e0eddb7fce0cfc8f04c0ec12ec248.scope - libcontainer container f5e10ea63a879266f8130520d14d692e1f8e0eddb7fce0cfc8f04c0ec12ec248. Jan 30 13:03:50.381531 containerd[1476]: time="2025-01-30T13:03:50.381473549Z" level=info msg="StartContainer for \"f5e10ea63a879266f8130520d14d692e1f8e0eddb7fce0cfc8f04c0ec12ec248\" returns successfully" Jan 30 13:03:50.966323 kubelet[2559]: I0130 13:03:50.966235 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f499d887f-64bsx" podStartSLOduration=17.967139269 podStartE2EDuration="21.966215947s" podCreationTimestamp="2025-01-30 13:03:29 +0000 UTC" firstStartedPulling="2025-01-30 13:03:46.29693275 +0000 UTC m=+28.844028713" lastFinishedPulling="2025-01-30 13:03:50.296009428 +0000 UTC m=+32.843105391" observedRunningTime="2025-01-30 13:03:50.965321978 +0000 UTC m=+33.512417941" watchObservedRunningTime="2025-01-30 13:03:50.966215947 +0000 UTC m=+33.513311910" Jan 30 13:03:51.246777 containerd[1476]: time="2025-01-30T13:03:51.244207322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:51.249909 containerd[1476]: time="2025-01-30T13:03:51.249842623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 13:03:51.253485 containerd[1476]: time="2025-01-30T13:03:51.253440872Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:51.256774 containerd[1476]: time="2025-01-30T13:03:51.256732097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:51.257364 containerd[1476]: time="2025-01-30T13:03:51.257324586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 961.012394ms" Jan 30 13:03:51.257423 containerd[1476]: time="2025-01-30T13:03:51.257366584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 13:03:51.261771 containerd[1476]: time="2025-01-30T13:03:51.261652796Z" level=info msg="CreateContainer within sandbox \"6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:03:51.274060 containerd[1476]: time="2025-01-30T13:03:51.274006301Z" level=info msg="CreateContainer within sandbox \"6d3b21d2bce095472b29c5e389d901c5fabead1b6d9e14c7857033382bca7b81\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f275c75cd442c3ad4c34f2288eb06d6b4eecf6c03e378b678d17bf7984468d1\"" Jan 30 13:03:51.274543 containerd[1476]: time="2025-01-30T13:03:51.274518794Z" level=info msg="StartContainer for \"4f275c75cd442c3ad4c34f2288eb06d6b4eecf6c03e378b678d17bf7984468d1\"" Jan 30 13:03:51.313850 systemd[1]: Started cri-containerd-4f275c75cd442c3ad4c34f2288eb06d6b4eecf6c03e378b678d17bf7984468d1.scope - libcontainer container 4f275c75cd442c3ad4c34f2288eb06d6b4eecf6c03e378b678d17bf7984468d1. Jan 30 13:03:51.348361 containerd[1476]: time="2025-01-30T13:03:51.348051133Z" level=info msg="StartContainer for \"4f275c75cd442c3ad4c34f2288eb06d6b4eecf6c03e378b678d17bf7984468d1\" returns successfully" Jan 30 13:03:51.558668 systemd[1]: run-containerd-runc-k8s.io-4f275c75cd442c3ad4c34f2288eb06d6b4eecf6c03e378b678d17bf7984468d1-runc.GZ6T5S.mount: Deactivated successfully. Jan 30 13:03:51.642758 kubelet[2559]: I0130 13:03:51.642701 2559 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:03:51.644954 kubelet[2559]: I0130 13:03:51.644921 2559 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:03:51.958497 kubelet[2559]: I0130 13:03:51.958020 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:53.271198 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:55800.service - OpenSSH per-connection server daemon (10.0.0.1:55800). Jan 30 13:03:53.349643 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 55800 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:53.351635 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:53.356342 systemd-logind[1455]: New session 9 of user core. Jan 30 13:03:53.365840 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:03:53.671869 sshd[5524]: Connection closed by 10.0.0.1 port 55800 Jan 30 13:03:53.671550 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:53.675355 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:03:53.675696 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:55800.service: Deactivated successfully. Jan 30 13:03:53.677645 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 30 13:03:53.678593 systemd-logind[1455]: Removed session 9. Jan 30 13:03:54.257576 kubelet[2559]: I0130 13:03:54.257526 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:03:54.257970 kubelet[2559]: E0130 13:03:54.257923 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:54.274074 kubelet[2559]: I0130 13:03:54.273977 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kw84f" podStartSLOduration=19.982612993 podStartE2EDuration="25.273954157s" podCreationTimestamp="2025-01-30 13:03:29 +0000 UTC" firstStartedPulling="2025-01-30 13:03:45.966920452 +0000 UTC m=+28.514016375" lastFinishedPulling="2025-01-30 13:03:51.258261576 +0000 UTC m=+33.805357539" observedRunningTime="2025-01-30 13:03:51.975248105 +0000 UTC m=+34.522344068" watchObservedRunningTime="2025-01-30 13:03:54.273954157 +0000 UTC m=+36.821050160" Jan 30 13:03:54.916650 kernel: bpftool[5604]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:03:54.966550 kubelet[2559]: E0130 13:03:54.966398 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:55.088921 systemd-networkd[1402]: vxlan.calico: Link UP Jan 30 13:03:55.089397 systemd-networkd[1402]: vxlan.calico: Gained carrier Jan 30 13:03:56.865723 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Jan 30 13:03:58.684741 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:55888.service - OpenSSH per-connection server daemon (10.0.0.1:55888). Jan 30 13:03:58.753038 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 55888 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:58.754856 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:58.763328 systemd-logind[1455]: New session 10 of user core. Jan 30 13:03:58.773831 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:03:59.018097 sshd[5698]: Connection closed by 10.0.0.1 port 55888 Jan 30 13:03:59.017089 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:59.030052 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:55888.service: Deactivated successfully. Jan 30 13:03:59.032268 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:03:59.034862 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:03:59.037261 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:55894.service - OpenSSH per-connection server daemon (10.0.0.1:55894). Jan 30 13:03:59.039379 systemd-logind[1455]: Removed session 10. Jan 30 13:03:59.086234 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 55894 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:59.087925 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:59.096068 systemd-logind[1455]: New session 11 of user core. Jan 30 13:03:59.114783 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 13:03:59.372819 sshd[5717]: Connection closed by 10.0.0.1 port 55894 Jan 30 13:03:59.375642 sshd-session[5715]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:59.384476 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:55894.service: Deactivated successfully. Jan 30 13:03:59.386234 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:03:59.391204 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:03:59.394279 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:55898.service - OpenSSH per-connection server daemon (10.0.0.1:55898). Jan 30 13:03:59.396052 systemd-logind[1455]: Removed session 11. Jan 30 13:03:59.452423 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 55898 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:59.453869 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:59.458624 systemd-logind[1455]: New session 12 of user core. Jan 30 13:03:59.468829 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:03:59.631838 sshd[5730]: Connection closed by 10.0.0.1 port 55898 Jan 30 13:03:59.632333 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:59.639269 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:55898.service: Deactivated successfully. Jan 30 13:03:59.642626 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:03:59.644235 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:03:59.645375 systemd-logind[1455]: Removed session 12. Jan 30 13:04:04.662094 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:42432.service - OpenSSH per-connection server daemon (10.0.0.1:42432). Jan 30 13:04:04.707521 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 42432 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:04.709264 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:04.715324 systemd-logind[1455]: New session 13 of user core. Jan 30 13:04:04.722818 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:04:04.849134 sshd[5752]: Connection closed by 10.0.0.1 port 42432 Jan 30 13:04:04.849713 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:04.861301 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:42432.service: Deactivated successfully. Jan 30 13:04:04.863056 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:04:04.864563 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:04:04.882988 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:42434.service - OpenSSH per-connection server daemon (10.0.0.1:42434). Jan 30 13:04:04.884054 systemd-logind[1455]: Removed session 13. Jan 30 13:04:04.942185 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 42434 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:04.943503 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:04.947406 systemd-logind[1455]: New session 14 of user core. Jan 30 13:04:04.954804 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 30 13:04:05.182947 sshd[5767]: Connection closed by 10.0.0.1 port 42434 Jan 30 13:04:05.182734 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:05.193930 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:42434.service: Deactivated successfully. Jan 30 13:04:05.196983 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:04:05.199439 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:04:05.212312 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438). Jan 30 13:04:05.213114 systemd-logind[1455]: Removed session 14. Jan 30 13:04:05.263068 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:05.264988 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:05.269761 systemd-logind[1455]: New session 15 of user core. Jan 30 13:04:05.279811 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:04:05.977836 sshd[5781]: Connection closed by 10.0.0.1 port 42438 Jan 30 13:04:05.978680 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:05.985121 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:42438.service: Deactivated successfully. Jan 30 13:04:05.987343 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:04:05.990531 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:04:05.999102 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:42448.service - OpenSSH per-connection server daemon (10.0.0.1:42448). Jan 30 13:04:06.002743 systemd-logind[1455]: Removed session 15. Jan 30 13:04:06.044392 sshd[5806]: Accepted publickey for core from 10.0.0.1 port 42448 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:06.045871 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:06.050309 systemd-logind[1455]: New session 16 of user core. Jan 30 13:04:06.056764 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:04:06.432330 sshd[5809]: Connection closed by 10.0.0.1 port 42448 Jan 30 13:04:06.433334 sshd-session[5806]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:06.439544 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:42448.service: Deactivated successfully. Jan 30 13:04:06.443239 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:04:06.445694 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:04:06.453979 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:42452.service - OpenSSH per-connection server daemon (10.0.0.1:42452). Jan 30 13:04:06.455120 systemd-logind[1455]: Removed session 16. Jan 30 13:04:06.498401 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 42452 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:06.499880 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:06.505103 systemd-logind[1455]: New session 17 of user core. Jan 30 13:04:06.516748 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:04:06.671444 sshd[5821]: Connection closed by 10.0.0.1 port 42452 Jan 30 13:04:06.672017 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:06.674887 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 30 13:04:06.678032 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:42452.service: Deactivated successfully. Jan 30 13:04:06.678074 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:04:06.681083 systemd-logind[1455]: Removed session 17. Jan 30 13:04:07.391009 kubelet[2559]: I0130 13:04:07.390632 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:11.699890 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460). Jan 30 13:04:11.747752 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:11.749218 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:11.757886 systemd-logind[1455]: New session 18 of user core. Jan 30 13:04:11.767840 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:04:11.948895 sshd[5878]: Connection closed by 10.0.0.1 port 42460 Jan 30 13:04:11.949082 sshd-session[5876]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:11.952385 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:42460.service: Deactivated successfully. Jan 30 13:04:11.954240 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:04:11.955912 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:04:11.957130 systemd-logind[1455]: Removed session 18. Jan 30 13:04:15.921034 kubelet[2559]: E0130 13:04:15.920726 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:16.961611 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:36430.service - OpenSSH per-connection server daemon (10.0.0.1:36430). Jan 30 13:04:17.026693 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 36430 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:17.028200 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:17.032247 systemd-logind[1455]: New session 19 of user core. Jan 30 13:04:17.038854 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:04:17.208436 sshd[5926]: Connection closed by 10.0.0.1 port 36430 Jan 30 13:04:17.208901 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:17.212784 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:36430.service: Deactivated successfully. Jan 30 13:04:17.214936 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:04:17.215981 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:04:17.216963 systemd-logind[1455]: Removed session 19. 
Jan 30 13:04:17.551847 containerd[1476]: time="2025-01-30T13:04:17.551803918Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:04:17.552336 containerd[1476]: time="2025-01-30T13:04:17.551940475Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:04:17.552336 containerd[1476]: time="2025-01-30T13:04:17.551953714Z" level=info msg="StopPodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:04:17.563716 containerd[1476]: time="2025-01-30T13:04:17.563648724Z" level=info msg="RemovePodSandbox for \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:04:17.563716 containerd[1476]: time="2025-01-30T13:04:17.563725922Z" level=info msg="Forcibly stopping sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\"" Jan 30 13:04:17.563959 containerd[1476]: time="2025-01-30T13:04:17.563816079Z" level=info msg="TearDown network for sandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" successfully" Jan 30 13:04:17.576658 containerd[1476]: time="2025-01-30T13:04:17.576579020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.576768 containerd[1476]: time="2025-01-30T13:04:17.576703217Z" level=info msg="RemovePodSandbox \"144fc21131be39a0f42c1537abb474ebcc9fa2400214cdc58e533fbeaf811efa\" returns successfully" Jan 30 13:04:17.577681 containerd[1476]: time="2025-01-30T13:04:17.577501796Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:04:17.577681 containerd[1476]: time="2025-01-30T13:04:17.577631833Z" level=info msg="TearDown network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" successfully" Jan 30 13:04:17.577681 containerd[1476]: time="2025-01-30T13:04:17.577642072Z" level=info msg="StopPodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" returns successfully" Jan 30 13:04:17.584982 containerd[1476]: time="2025-01-30T13:04:17.584942638Z" level=info msg="RemovePodSandbox for \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:04:17.584982 containerd[1476]: time="2025-01-30T13:04:17.584983397Z" level=info msg="Forcibly stopping sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\"" Jan 30 13:04:17.585118 containerd[1476]: time="2025-01-30T13:04:17.585075795Z" level=info msg="TearDown network for sandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" successfully" Jan 30 13:04:17.588077 containerd[1476]: time="2025-01-30T13:04:17.588032876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.588163 containerd[1476]: time="2025-01-30T13:04:17.588099715Z" level=info msg="RemovePodSandbox \"0052d1476361ed68d924c536afe59ee79b25df1b051eb83ee9e0d726b941bc9c\" returns successfully" Jan 30 13:04:17.588544 containerd[1476]: time="2025-01-30T13:04:17.588518384Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" Jan 30 13:04:17.588684 containerd[1476]: time="2025-01-30T13:04:17.588633540Z" level=info msg="TearDown network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" successfully" Jan 30 13:04:17.588684 containerd[1476]: time="2025-01-30T13:04:17.588649500Z" level=info msg="StopPodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" returns successfully" Jan 30 13:04:17.589615 containerd[1476]: time="2025-01-30T13:04:17.588930413Z" level=info msg="RemovePodSandbox for \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" Jan 30 13:04:17.589615 containerd[1476]: time="2025-01-30T13:04:17.588961172Z" level=info msg="Forcibly stopping sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\"" Jan 30 13:04:17.589615 containerd[1476]: time="2025-01-30T13:04:17.589022090Z" level=info msg="TearDown network for sandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" successfully" Jan 30 13:04:17.591988 containerd[1476]: time="2025-01-30T13:04:17.591956932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.592033 containerd[1476]: time="2025-01-30T13:04:17.592021211Z" level=info msg="RemovePodSandbox \"5050c56e90f5114e30b9fd95242de2c970ed8d4595a32bf4a5efd4940ca8e5dc\" returns successfully" Jan 30 13:04:17.592500 containerd[1476]: time="2025-01-30T13:04:17.592475318Z" level=info msg="StopPodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\"" Jan 30 13:04:17.592620 containerd[1476]: time="2025-01-30T13:04:17.592581436Z" level=info msg="TearDown network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" successfully" Jan 30 13:04:17.592659 containerd[1476]: time="2025-01-30T13:04:17.592618315Z" level=info msg="StopPodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" returns successfully" Jan 30 13:04:17.593000 containerd[1476]: time="2025-01-30T13:04:17.592949706Z" level=info msg="RemovePodSandbox for \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\"" Jan 30 13:04:17.593000 containerd[1476]: time="2025-01-30T13:04:17.592984345Z" level=info msg="Forcibly stopping sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\"" Jan 30 13:04:17.593117 containerd[1476]: time="2025-01-30T13:04:17.593054103Z" level=info msg="TearDown network for sandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" successfully" Jan 30 13:04:17.595625 containerd[1476]: time="2025-01-30T13:04:17.595558277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.595711 containerd[1476]: time="2025-01-30T13:04:17.595645394Z" level=info msg="RemovePodSandbox \"8e62769cb520bc417a3446ec3e424a25f086fcf882c1ae44e5ce183a6a33b58d\" returns successfully" Jan 30 13:04:17.596068 containerd[1476]: time="2025-01-30T13:04:17.596033224Z" level=info msg="StopPodSandbox for \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\"" Jan 30 13:04:17.596160 containerd[1476]: time="2025-01-30T13:04:17.596143341Z" level=info msg="TearDown network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\" successfully" Jan 30 13:04:17.596200 containerd[1476]: time="2025-01-30T13:04:17.596158141Z" level=info msg="StopPodSandbox for \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\" returns successfully" Jan 30 13:04:17.596591 containerd[1476]: time="2025-01-30T13:04:17.596553330Z" level=info msg="RemovePodSandbox for \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\"" Jan 30 13:04:17.596638 containerd[1476]: time="2025-01-30T13:04:17.596615049Z" level=info msg="Forcibly stopping sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\"" Jan 30 13:04:17.596710 containerd[1476]: time="2025-01-30T13:04:17.596687727Z" level=info msg="TearDown network for sandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\" successfully" Jan 30 13:04:17.600118 containerd[1476]: time="2025-01-30T13:04:17.600070397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.600226 containerd[1476]: time="2025-01-30T13:04:17.600152435Z" level=info msg="RemovePodSandbox \"90a6d303e458d7ac0616b8fab291bd48f4242f53521e4ae8c6439314ebdf1a12\" returns successfully" Jan 30 13:04:17.601159 containerd[1476]: time="2025-01-30T13:04:17.600817257Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:04:17.601159 containerd[1476]: time="2025-01-30T13:04:17.600919134Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:04:17.601159 containerd[1476]: time="2025-01-30T13:04:17.600930734Z" level=info msg="StopPodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:04:17.602200 containerd[1476]: time="2025-01-30T13:04:17.601483359Z" level=info msg="RemovePodSandbox for \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:04:17.602200 containerd[1476]: time="2025-01-30T13:04:17.601511159Z" level=info msg="Forcibly stopping sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\"" Jan 30 13:04:17.602200 containerd[1476]: time="2025-01-30T13:04:17.601577437Z" level=info msg="TearDown network for sandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" successfully" Jan 30 13:04:17.607011 containerd[1476]: time="2025-01-30T13:04:17.606956574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.607138 containerd[1476]: time="2025-01-30T13:04:17.607030252Z" level=info msg="RemovePodSandbox \"2fa7c71c031aa8d5814264930e8142dd1828c577abbe79735d158974f3c75698\" returns successfully" Jan 30 13:04:17.608324 containerd[1476]: time="2025-01-30T13:04:17.608287499Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:04:17.608434 containerd[1476]: time="2025-01-30T13:04:17.608409055Z" level=info msg="TearDown network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" successfully" Jan 30 13:04:17.608434 containerd[1476]: time="2025-01-30T13:04:17.608427975Z" level=info msg="StopPodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" returns successfully" Jan 30 13:04:17.608828 containerd[1476]: time="2025-01-30T13:04:17.608747407Z" level=info msg="RemovePodSandbox for \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:04:17.608863 containerd[1476]: time="2025-01-30T13:04:17.608829124Z" level=info msg="Forcibly stopping sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\"" Jan 30 13:04:17.608928 containerd[1476]: time="2025-01-30T13:04:17.608912122Z" level=info msg="TearDown network for sandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" successfully" Jan 30 13:04:17.611695 containerd[1476]: time="2025-01-30T13:04:17.611642170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.611755 containerd[1476]: time="2025-01-30T13:04:17.611720848Z" level=info msg="RemovePodSandbox \"35b59a098d30c4bca7c9a975837a895bef0a72a468f7375cdbfb22ec4b909754\" returns successfully" Jan 30 13:04:17.612495 containerd[1476]: time="2025-01-30T13:04:17.612455548Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" Jan 30 13:04:17.612583 containerd[1476]: time="2025-01-30T13:04:17.612558065Z" level=info msg="TearDown network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" successfully" Jan 30 13:04:17.612583 containerd[1476]: time="2025-01-30T13:04:17.612573425Z" level=info msg="StopPodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" returns successfully" Jan 30 13:04:17.613915 containerd[1476]: time="2025-01-30T13:04:17.613873990Z" level=info msg="RemovePodSandbox for \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" Jan 30 13:04:17.613915 containerd[1476]: time="2025-01-30T13:04:17.613912669Z" level=info msg="Forcibly stopping sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\"" Jan 30 13:04:17.613999 containerd[1476]: time="2025-01-30T13:04:17.613984267Z" level=info msg="TearDown network for sandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" successfully" Jan 30 13:04:17.617047 containerd[1476]: time="2025-01-30T13:04:17.616981428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.617109 containerd[1476]: time="2025-01-30T13:04:17.617075745Z" level=info msg="RemovePodSandbox \"c8efa7aec4a3f187326a7cf3623c2715e4156903579c6743f88b56f9ed168c4c\" returns successfully" Jan 30 13:04:17.617800 containerd[1476]: time="2025-01-30T13:04:17.617767167Z" level=info msg="StopPodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\"" Jan 30 13:04:17.617902 containerd[1476]: time="2025-01-30T13:04:17.617886164Z" level=info msg="TearDown network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" successfully" Jan 30 13:04:17.617931 containerd[1476]: time="2025-01-30T13:04:17.617902603Z" level=info msg="StopPodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" returns successfully" Jan 30 13:04:17.618358 containerd[1476]: time="2025-01-30T13:04:17.618304033Z" level=info msg="RemovePodSandbox for \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\"" Jan 30 13:04:17.618358 containerd[1476]: time="2025-01-30T13:04:17.618350952Z" level=info msg="Forcibly stopping sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\"" Jan 30 13:04:17.618441 containerd[1476]: time="2025-01-30T13:04:17.618424350Z" level=info msg="TearDown network for sandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" successfully" Jan 30 13:04:17.621721 containerd[1476]: time="2025-01-30T13:04:17.621672343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.621823 containerd[1476]: time="2025-01-30T13:04:17.621757421Z" level=info msg="RemovePodSandbox \"ec456f68e433fba4fb974e9bd18588ad52e90092aae461fdc7248fec270c3280\" returns successfully" Jan 30 13:04:17.622505 containerd[1476]: time="2025-01-30T13:04:17.622471322Z" level=info msg="StopPodSandbox for \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\"" Jan 30 13:04:17.622597 containerd[1476]: time="2025-01-30T13:04:17.622569960Z" level=info msg="TearDown network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\" successfully" Jan 30 13:04:17.622641 containerd[1476]: time="2025-01-30T13:04:17.622584359Z" level=info msg="StopPodSandbox for \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\" returns successfully" Jan 30 13:04:17.623001 containerd[1476]: time="2025-01-30T13:04:17.622943510Z" level=info msg="RemovePodSandbox for \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\"" Jan 30 13:04:17.623057 containerd[1476]: time="2025-01-30T13:04:17.623002548Z" level=info msg="Forcibly stopping sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\"" Jan 30 13:04:17.623102 containerd[1476]: time="2025-01-30T13:04:17.623085466Z" level=info msg="TearDown network for sandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\" successfully" Jan 30 13:04:17.626145 containerd[1476]: time="2025-01-30T13:04:17.626099226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.626241 containerd[1476]: time="2025-01-30T13:04:17.626210143Z" level=info msg="RemovePodSandbox \"3c570fcefd704fc125ce677afb9f1dc9f63c773161f886b11963981eeb06515d\" returns successfully" Jan 30 13:04:17.626685 containerd[1476]: time="2025-01-30T13:04:17.626653171Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:04:17.626771 containerd[1476]: time="2025-01-30T13:04:17.626755248Z" level=info msg="TearDown network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" successfully" Jan 30 13:04:17.626771 containerd[1476]: time="2025-01-30T13:04:17.626769368Z" level=info msg="StopPodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" returns successfully" Jan 30 13:04:17.627139 containerd[1476]: time="2025-01-30T13:04:17.627094439Z" level=info msg="RemovePodSandbox for \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:04:17.627139 containerd[1476]: time="2025-01-30T13:04:17.627129239Z" level=info msg="Forcibly stopping sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\"" Jan 30 13:04:17.627350 containerd[1476]: time="2025-01-30T13:04:17.627331113Z" level=info msg="TearDown network for sandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" successfully" Jan 30 13:04:17.636475 containerd[1476]: time="2025-01-30T13:04:17.636418712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.636610 containerd[1476]: time="2025-01-30T13:04:17.636494470Z" level=info msg="RemovePodSandbox \"df66bd6fcb6a38c10cff7d4d19cad38be0f23fa4e6d01253025405d35476663e\" returns successfully" Jan 30 13:04:17.637386 containerd[1476]: time="2025-01-30T13:04:17.637343727Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" Jan 30 13:04:17.637493 containerd[1476]: time="2025-01-30T13:04:17.637476084Z" level=info msg="TearDown network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" successfully" Jan 30 13:04:17.637534 containerd[1476]: time="2025-01-30T13:04:17.637491083Z" level=info msg="StopPodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" returns successfully" Jan 30 13:04:17.638003 containerd[1476]: time="2025-01-30T13:04:17.637979991Z" level=info msg="RemovePodSandbox for \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" Jan 30 13:04:17.638049 containerd[1476]: time="2025-01-30T13:04:17.638008350Z" level=info msg="Forcibly stopping sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\"" Jan 30 13:04:17.638093 containerd[1476]: time="2025-01-30T13:04:17.638075668Z" level=info msg="TearDown network for sandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" successfully" Jan 30 13:04:17.647209 containerd[1476]: time="2025-01-30T13:04:17.647144467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.647309 containerd[1476]: time="2025-01-30T13:04:17.647216305Z" level=info msg="RemovePodSandbox \"35dd0410d989508b1ae72995429a86db41e358c8a098911b5b28bc01b367ad26\" returns successfully" Jan 30 13:04:17.647691 containerd[1476]: time="2025-01-30T13:04:17.647664533Z" level=info msg="StopPodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\"" Jan 30 13:04:17.647787 containerd[1476]: time="2025-01-30T13:04:17.647769251Z" level=info msg="TearDown network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" successfully" Jan 30 13:04:17.647787 containerd[1476]: time="2025-01-30T13:04:17.647784450Z" level=info msg="StopPodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" returns successfully" Jan 30 13:04:17.648159 containerd[1476]: time="2025-01-30T13:04:17.648135361Z" level=info msg="RemovePodSandbox for \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\"" Jan 30 13:04:17.648194 containerd[1476]: time="2025-01-30T13:04:17.648165680Z" level=info msg="Forcibly stopping sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\"" Jan 30 13:04:17.648245 containerd[1476]: time="2025-01-30T13:04:17.648230238Z" level=info msg="TearDown network for sandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" successfully" Jan 30 13:04:17.655763 containerd[1476]: time="2025-01-30T13:04:17.655686880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.656051 containerd[1476]: time="2025-01-30T13:04:17.655936554Z" level=info msg="RemovePodSandbox \"5ec65669131bf01e5f4583128fc9f6ee7dc6c04be79f0aad25a456a1b83b6797\" returns successfully" Jan 30 13:04:17.656603 containerd[1476]: time="2025-01-30T13:04:17.656431821Z" level=info msg="StopPodSandbox for \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\"" Jan 30 13:04:17.656712 containerd[1476]: time="2025-01-30T13:04:17.656694014Z" level=info msg="TearDown network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\" successfully" Jan 30 13:04:17.656884 containerd[1476]: time="2025-01-30T13:04:17.656756932Z" level=info msg="StopPodSandbox for \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\" returns successfully" Jan 30 13:04:17.657304 containerd[1476]: time="2025-01-30T13:04:17.657279358Z" level=info msg="RemovePodSandbox for \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\"" Jan 30 13:04:17.657369 containerd[1476]: time="2025-01-30T13:04:17.657309317Z" level=info msg="Forcibly stopping sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\"" Jan 30 13:04:17.657395 containerd[1476]: time="2025-01-30T13:04:17.657371316Z" level=info msg="TearDown network for sandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\" successfully" Jan 30 13:04:17.660572 containerd[1476]: time="2025-01-30T13:04:17.660503193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.660694 containerd[1476]: time="2025-01-30T13:04:17.660663028Z" level=info msg="RemovePodSandbox \"0495fb8fe4bb2805a66ec45e77c031e67b9b4d0cc58ba6657a80b75cb6bfae3c\" returns successfully" Jan 30 13:04:17.661343 containerd[1476]: time="2025-01-30T13:04:17.661311731Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:04:17.661672 containerd[1476]: time="2025-01-30T13:04:17.661566124Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:04:17.661672 containerd[1476]: time="2025-01-30T13:04:17.661583564Z" level=info msg="StopPodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:04:17.661672 containerd[1476]: time="2025-01-30T13:04:17.661937595Z" level=info msg="RemovePodSandbox for \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:04:17.661672 containerd[1476]: time="2025-01-30T13:04:17.661968754Z" level=info msg="Forcibly stopping sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\"" Jan 30 13:04:17.661672 containerd[1476]: time="2025-01-30T13:04:17.662037552Z" level=info msg="TearDown network for sandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" successfully" Jan 30 13:04:17.664928 containerd[1476]: time="2025-01-30T13:04:17.664884756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.665025 containerd[1476]: time="2025-01-30T13:04:17.664955754Z" level=info msg="RemovePodSandbox \"a5d295c47e96d1c5b1c07aae599b48f3d11a387edc12b031c5eb4bda19e91a06\" returns successfully" Jan 30 13:04:17.665661 containerd[1476]: time="2025-01-30T13:04:17.665631056Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:04:17.665996 containerd[1476]: time="2025-01-30T13:04:17.665920009Z" level=info msg="TearDown network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" successfully" Jan 30 13:04:17.665996 containerd[1476]: time="2025-01-30T13:04:17.665936728Z" level=info msg="StopPodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" returns successfully" Jan 30 13:04:17.666321 containerd[1476]: time="2025-01-30T13:04:17.666247000Z" level=info msg="RemovePodSandbox for \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:04:17.666321 containerd[1476]: time="2025-01-30T13:04:17.666278959Z" level=info msg="Forcibly stopping sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\"" Jan 30 13:04:17.666487 containerd[1476]: time="2025-01-30T13:04:17.666342358Z" level=info msg="TearDown network for sandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" successfully" Jan 30 13:04:17.670407 containerd[1476]: time="2025-01-30T13:04:17.669659750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.670407 containerd[1476]: time="2025-01-30T13:04:17.669726868Z" level=info msg="RemovePodSandbox \"0b8cab69669dccc4405556aab35436320fe776560fe176a6c6371978cd4d55c5\" returns successfully" Jan 30 13:04:17.670407 containerd[1476]: time="2025-01-30T13:04:17.670219415Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" Jan 30 13:04:17.670407 containerd[1476]: time="2025-01-30T13:04:17.670316972Z" level=info msg="TearDown network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" successfully" Jan 30 13:04:17.670407 containerd[1476]: time="2025-01-30T13:04:17.670327012Z" level=info msg="StopPodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" returns successfully" Jan 30 13:04:17.671298 containerd[1476]: time="2025-01-30T13:04:17.671082152Z" level=info msg="RemovePodSandbox for \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" Jan 30 13:04:17.671298 containerd[1476]: time="2025-01-30T13:04:17.671111431Z" level=info msg="Forcibly stopping sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\"" Jan 30 13:04:17.671298 containerd[1476]: time="2025-01-30T13:04:17.671244307Z" level=info msg="TearDown network for sandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" successfully" Jan 30 13:04:17.675346 containerd[1476]: time="2025-01-30T13:04:17.674615138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.675346 containerd[1476]: time="2025-01-30T13:04:17.674721175Z" level=info msg="RemovePodSandbox \"bf5dd00c3929bb02396db453e002b12287ce2b372add738dca0ef8a9044ac221\" returns successfully" Jan 30 13:04:17.677924 containerd[1476]: time="2025-01-30T13:04:17.676381851Z" level=info msg="StopPodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\"" Jan 30 13:04:17.677924 containerd[1476]: time="2025-01-30T13:04:17.676668043Z" level=info msg="TearDown network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" successfully" Jan 30 13:04:17.677924 containerd[1476]: time="2025-01-30T13:04:17.676716322Z" level=info msg="StopPodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" returns successfully" Jan 30 13:04:17.680281 containerd[1476]: time="2025-01-30T13:04:17.678397878Z" level=info msg="RemovePodSandbox for \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\"" Jan 30 13:04:17.680281 containerd[1476]: time="2025-01-30T13:04:17.678474956Z" level=info msg="Forcibly stopping sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\"" Jan 30 13:04:17.680281 containerd[1476]: time="2025-01-30T13:04:17.678600272Z" level=info msg="TearDown network for sandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" successfully" Jan 30 13:04:17.681720 containerd[1476]: time="2025-01-30T13:04:17.681686510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.681872 containerd[1476]: time="2025-01-30T13:04:17.681843826Z" level=info msg="RemovePodSandbox \"01b2b7084bf10be0191423ef465ccb8e5e96bdd56da1dc74b2073d6e2aca0bd4\" returns successfully" Jan 30 13:04:17.682304 containerd[1476]: time="2025-01-30T13:04:17.682278935Z" level=info msg="StopPodSandbox for \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\"" Jan 30 13:04:17.682395 containerd[1476]: time="2025-01-30T13:04:17.682379772Z" level=info msg="TearDown network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\" successfully" Jan 30 13:04:17.682420 containerd[1476]: time="2025-01-30T13:04:17.682395211Z" level=info msg="StopPodSandbox for \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\" returns successfully" Jan 30 13:04:17.682745 containerd[1476]: time="2025-01-30T13:04:17.682720123Z" level=info msg="RemovePodSandbox for \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\"" Jan 30 13:04:17.682745 containerd[1476]: time="2025-01-30T13:04:17.682746442Z" level=info msg="Forcibly stopping sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\"" Jan 30 13:04:17.682852 containerd[1476]: time="2025-01-30T13:04:17.682799921Z" level=info msg="TearDown network for sandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\" successfully" Jan 30 13:04:17.687652 containerd[1476]: time="2025-01-30T13:04:17.685676964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.687652 containerd[1476]: time="2025-01-30T13:04:17.685776722Z" level=info msg="RemovePodSandbox \"3cb1e6825a901ada41bfdeb0528a6faa8257a7dd5d8ca396955ed98c96d63640\" returns successfully" Jan 30 13:04:17.688285 containerd[1476]: time="2025-01-30T13:04:17.688220017Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:04:17.688645 containerd[1476]: time="2025-01-30T13:04:17.688624046Z" level=info msg="TearDown network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" successfully" Jan 30 13:04:17.688645 containerd[1476]: time="2025-01-30T13:04:17.688642846Z" level=info msg="StopPodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" returns successfully" Jan 30 13:04:17.689499 containerd[1476]: time="2025-01-30T13:04:17.688983437Z" level=info msg="RemovePodSandbox for \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:04:17.689499 containerd[1476]: time="2025-01-30T13:04:17.689014476Z" level=info msg="Forcibly stopping sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\"" Jan 30 13:04:17.689499 containerd[1476]: time="2025-01-30T13:04:17.689082154Z" level=info msg="TearDown network for sandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" successfully" Jan 30 13:04:17.691550 containerd[1476]: time="2025-01-30T13:04:17.691513289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.691612 containerd[1476]: time="2025-01-30T13:04:17.691576208Z" level=info msg="RemovePodSandbox \"ed1fb33fca754cc40e42e26806dbbc3cfdda2a37a4de910c7334fae2b33524be\" returns successfully" Jan 30 13:04:17.692182 containerd[1476]: time="2025-01-30T13:04:17.692155152Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" Jan 30 13:04:17.692274 containerd[1476]: time="2025-01-30T13:04:17.692257350Z" level=info msg="TearDown network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" successfully" Jan 30 13:04:17.692308 containerd[1476]: time="2025-01-30T13:04:17.692273229Z" level=info msg="StopPodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" returns successfully" Jan 30 13:04:17.692582 containerd[1476]: time="2025-01-30T13:04:17.692559102Z" level=info msg="RemovePodSandbox for \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" Jan 30 13:04:17.692752 containerd[1476]: time="2025-01-30T13:04:17.692604580Z" level=info msg="Forcibly stopping sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\"" Jan 30 13:04:17.692752 containerd[1476]: time="2025-01-30T13:04:17.692665739Z" level=info msg="TearDown network for sandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" successfully" Jan 30 13:04:17.695069 containerd[1476]: time="2025-01-30T13:04:17.695038116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.695143 containerd[1476]: time="2025-01-30T13:04:17.695094394Z" level=info msg="RemovePodSandbox \"6cf762db3582e3aa060d9ce6a49e3ffb8dc1435b98c8ef8255cb92e6996dd824\" returns successfully" Jan 30 13:04:17.695701 containerd[1476]: time="2025-01-30T13:04:17.695532583Z" level=info msg="StopPodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\"" Jan 30 13:04:17.695701 containerd[1476]: time="2025-01-30T13:04:17.695625380Z" level=info msg="TearDown network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" successfully" Jan 30 13:04:17.695701 containerd[1476]: time="2025-01-30T13:04:17.695635620Z" level=info msg="StopPodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" returns successfully" Jan 30 13:04:17.695968 containerd[1476]: time="2025-01-30T13:04:17.695932772Z" level=info msg="RemovePodSandbox for \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\"" Jan 30 13:04:17.695998 containerd[1476]: time="2025-01-30T13:04:17.695970491Z" level=info msg="Forcibly stopping sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\"" Jan 30 13:04:17.696051 containerd[1476]: time="2025-01-30T13:04:17.696038289Z" level=info msg="TearDown network for sandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" successfully" Jan 30 13:04:17.698486 containerd[1476]: time="2025-01-30T13:04:17.698448425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.698541 containerd[1476]: time="2025-01-30T13:04:17.698514384Z" level=info msg="RemovePodSandbox \"9dfac4677edde509b317abcf092ad75ffe004513ee0ec727b97a80603c8f198e\" returns successfully" Jan 30 13:04:17.698934 containerd[1476]: time="2025-01-30T13:04:17.698894133Z" level=info msg="StopPodSandbox for \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\"" Jan 30 13:04:17.699016 containerd[1476]: time="2025-01-30T13:04:17.698999251Z" level=info msg="TearDown network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\" successfully" Jan 30 13:04:17.699044 containerd[1476]: time="2025-01-30T13:04:17.699014890Z" level=info msg="StopPodSandbox for \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\" returns successfully" Jan 30 13:04:17.699349 containerd[1476]: time="2025-01-30T13:04:17.699328522Z" level=info msg="RemovePodSandbox for \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\"" Jan 30 13:04:17.699380 containerd[1476]: time="2025-01-30T13:04:17.699352481Z" level=info msg="Forcibly stopping sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\"" Jan 30 13:04:17.699429 containerd[1476]: time="2025-01-30T13:04:17.699414120Z" level=info msg="TearDown network for sandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\" successfully" Jan 30 13:04:17.701787 containerd[1476]: time="2025-01-30T13:04:17.701743938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.701873 containerd[1476]: time="2025-01-30T13:04:17.701804216Z" level=info msg="RemovePodSandbox \"4d753f08a139dc4852b685592e562d0c3a06f9e2140869db245cb78d1f07ff12\" returns successfully" Jan 30 13:04:17.702136 containerd[1476]: time="2025-01-30T13:04:17.702114848Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:04:17.702259 containerd[1476]: time="2025-01-30T13:04:17.702242285Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:04:17.702352 containerd[1476]: time="2025-01-30T13:04:17.702258804Z" level=info msg="StopPodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:04:17.702580 containerd[1476]: time="2025-01-30T13:04:17.702555756Z" level=info msg="RemovePodSandbox for \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:04:17.702621 containerd[1476]: time="2025-01-30T13:04:17.702602235Z" level=info msg="Forcibly stopping sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\"" Jan 30 13:04:17.702679 containerd[1476]: time="2025-01-30T13:04:17.702664593Z" level=info msg="TearDown network for sandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" successfully" Jan 30 13:04:17.705138 containerd[1476]: time="2025-01-30T13:04:17.705104209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.705216 containerd[1476]: time="2025-01-30T13:04:17.705173127Z" level=info msg="RemovePodSandbox \"71b6aa4a9833179615d228a0d19b1fb97dc78d75c32bf28c7e874e9434057227\" returns successfully" Jan 30 13:04:17.705755 containerd[1476]: time="2025-01-30T13:04:17.705567356Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:04:17.705755 containerd[1476]: time="2025-01-30T13:04:17.705678073Z" level=info msg="TearDown network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" successfully" Jan 30 13:04:17.705755 containerd[1476]: time="2025-01-30T13:04:17.705691273Z" level=info msg="StopPodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" returns successfully" Jan 30 13:04:17.706017 containerd[1476]: time="2025-01-30T13:04:17.705990825Z" level=info msg="RemovePodSandbox for \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:04:17.706052 containerd[1476]: time="2025-01-30T13:04:17.706025504Z" level=info msg="Forcibly stopping sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\"" Jan 30 13:04:17.706104 containerd[1476]: time="2025-01-30T13:04:17.706090062Z" level=info msg="TearDown network for sandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" successfully" Jan 30 13:04:17.708467 containerd[1476]: time="2025-01-30T13:04:17.708429000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:17.708540 containerd[1476]: time="2025-01-30T13:04:17.708488999Z" level=info msg="RemovePodSandbox \"79977d568f0e32127ab2f5ffbcb9e92abaf88b4c4d292ab6f5d8d3e0685ef426\" returns successfully" Jan 30 13:04:17.708825 containerd[1476]: time="2025-01-30T13:04:17.708803510Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" Jan 30 13:04:17.708914 containerd[1476]: time="2025-01-30T13:04:17.708896388Z" level=info msg="TearDown network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" successfully" Jan 30 13:04:17.708940 containerd[1476]: time="2025-01-30T13:04:17.708913507Z" level=info msg="StopPodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" returns successfully" Jan 30 13:04:17.709617 containerd[1476]: time="2025-01-30T13:04:17.709167381Z" level=info msg="RemovePodSandbox for \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" Jan 30 13:04:17.709617 containerd[1476]: time="2025-01-30T13:04:17.709196860Z" level=info msg="Forcibly stopping sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\"" Jan 30 13:04:17.709617 containerd[1476]: time="2025-01-30T13:04:17.709259018Z" level=info msg="TearDown network for sandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" successfully" Jan 30 13:04:17.711685 containerd[1476]: time="2025-01-30T13:04:17.711637715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:04:17.711749 containerd[1476]: time="2025-01-30T13:04:17.711706913Z" level=info msg="RemovePodSandbox \"d8cc1b41daa9d2b65a55b4361f0063fec06aea6109431bdcb518b45538ed87c7\" returns successfully"
Jan 30 13:04:17.712268 containerd[1476]: time="2025-01-30T13:04:17.712085943Z" level=info msg="StopPodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\""
Jan 30 13:04:17.712268 containerd[1476]: time="2025-01-30T13:04:17.712189181Z" level=info msg="TearDown network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" successfully"
Jan 30 13:04:17.712268 containerd[1476]: time="2025-01-30T13:04:17.712200340Z" level=info msg="StopPodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" returns successfully"
Jan 30 13:04:17.712732 containerd[1476]: time="2025-01-30T13:04:17.712593610Z" level=info msg="RemovePodSandbox for \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\""
Jan 30 13:04:17.712732 containerd[1476]: time="2025-01-30T13:04:17.712620009Z" level=info msg="Forcibly stopping sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\""
Jan 30 13:04:17.712732 containerd[1476]: time="2025-01-30T13:04:17.712683327Z" level=info msg="TearDown network for sandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" successfully"
Jan 30 13:04:17.715396 containerd[1476]: time="2025-01-30T13:04:17.715360536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:04:17.715629 containerd[1476]: time="2025-01-30T13:04:17.715533252Z" level=info msg="RemovePodSandbox \"53d8ca28bf33ffd6ef59fa12aa77bae3998857794f8b0a7e3df66397d1e911b3\" returns successfully"
Jan 30 13:04:17.716093 containerd[1476]: time="2025-01-30T13:04:17.715934881Z" level=info msg="StopPodSandbox for \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\""
Jan 30 13:04:17.716093 containerd[1476]: time="2025-01-30T13:04:17.716026519Z" level=info msg="TearDown network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\" successfully"
Jan 30 13:04:17.716093 containerd[1476]: time="2025-01-30T13:04:17.716036558Z" level=info msg="StopPodSandbox for \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\" returns successfully"
Jan 30 13:04:17.716324 containerd[1476]: time="2025-01-30T13:04:17.716294952Z" level=info msg="RemovePodSandbox for \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\""
Jan 30 13:04:17.716362 containerd[1476]: time="2025-01-30T13:04:17.716332111Z" level=info msg="Forcibly stopping sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\""
Jan 30 13:04:17.716414 containerd[1476]: time="2025-01-30T13:04:17.716397349Z" level=info msg="TearDown network for sandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\" successfully"
Jan 30 13:04:17.718779 containerd[1476]: time="2025-01-30T13:04:17.718742167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:04:17.718873 containerd[1476]: time="2025-01-30T13:04:17.718803685Z" level=info msg="RemovePodSandbox \"13593bf931cf9b52ef4cd2b4f372d47b29791f93df15ed1a7cce853b5ddda3c0\" returns successfully"
Jan 30 13:04:22.229583 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:36444.service - OpenSSH per-connection server daemon (10.0.0.1:36444).
Jan 30 13:04:22.299289 sshd[5941]: Accepted publickey for core from 10.0.0.1 port 36444 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:04:22.300927 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:04:22.308943 systemd-logind[1455]: New session 20 of user core.
Jan 30 13:04:22.315854 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:04:22.542086 sshd[5943]: Connection closed by 10.0.0.1 port 36444
Jan 30 13:04:22.542881 sshd-session[5941]: pam_unix(sshd:session): session closed for user core
Jan 30 13:04:22.548673 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:36444.service: Deactivated successfully.
Jan 30 13:04:22.552469 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:04:22.553237 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:04:22.555091 systemd-logind[1455]: Removed session 20.