Dec 13 13:26:19.876760 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:26:19.876780 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 13:26:19.876789 kernel: KASLR enabled
Dec 13 13:26:19.876795 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:26:19.876800 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Dec 13 13:26:19.876805 kernel: random: crng init done
Dec 13 13:26:19.876812 kernel: secureboot: Secure boot disabled
Dec 13 13:26:19.876818 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:26:19.876823 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Dec 13 13:26:19.876830 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:26:19.876836 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876842 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876847 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876853 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876861 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876868 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876874 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876880 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876886 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:26:19.876892 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:26:19.876898 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:26:19.876904 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:26:19.876910 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Dec 13 13:26:19.876916 kernel: Zone ranges:
Dec 13 13:26:19.876922 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:26:19.876929 kernel: DMA32 empty
Dec 13 13:26:19.876935 kernel: Normal empty
Dec 13 13:26:19.876941 kernel: Movable zone start for each node
Dec 13 13:26:19.876947 kernel: Early memory node ranges
Dec 13 13:26:19.876953 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Dec 13 13:26:19.876959 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Dec 13 13:26:19.876965 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Dec 13 13:26:19.876971 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 13:26:19.876977 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 13:26:19.876983 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 13:26:19.876989 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 13:26:19.876995 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 13:26:19.877002 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 13:26:19.877008 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:26:19.877014 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:26:19.877023 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:26:19.877029 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:26:19.877035 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:26:19.877043 kernel: psci: Trusted OS migration not required
Dec 13 13:26:19.877049 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:26:19.877055 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:26:19.877062 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 13:26:19.877068 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 13:26:19.877075 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:26:19.877081 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:26:19.877088 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:26:19.877094 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:26:19.877101 kernel: CPU features: detected: Spectre-v4
Dec 13 13:26:19.877108 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:26:19.877115 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:26:19.877122 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:26:19.877128 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:26:19.877145 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:26:19.877151 kernel: alternatives: applying boot alternatives
Dec 13 13:26:19.877159 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:26:19.877166 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:26:19.877177 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:26:19.877185 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:26:19.877192 kernel: Fallback order for Node 0: 0
Dec 13 13:26:19.877200 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:26:19.877206 kernel: Policy zone: DMA
Dec 13 13:26:19.877213 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:26:19.877220 kernel: software IO TLB: area num 4.
Dec 13 13:26:19.877226 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 13:26:19.877233 kernel: Memory: 2385936K/2572288K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 186352K reserved, 0K cma-reserved)
Dec 13 13:26:19.877239 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:26:19.877245 kernel: trace event string verifier disabled
Dec 13 13:26:19.877252 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:26:19.877259 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:26:19.877265 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:26:19.877272 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:26:19.877280 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:26:19.877286 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:26:19.877293 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:26:19.877300 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:26:19.877306 kernel: GICv3: 256 SPIs implemented
Dec 13 13:26:19.877312 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:26:19.877318 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:26:19.877325 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 13:26:19.877331 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:26:19.877338 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:26:19.877344 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:26:19.877352 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:26:19.877358 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 13:26:19.877364 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 13:26:19.877370 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:26:19.877377 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:26:19.877383 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:26:19.877390 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:26:19.877396 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:26:19.877402 kernel: arm-pv: using stolen time PV
Dec 13 13:26:19.877409 kernel: Console: colour dummy device 80x25
Dec 13 13:26:19.877416 kernel: ACPI: Core revision 20230628
Dec 13 13:26:19.877424 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:26:19.877431 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:26:19.877437 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:26:19.877444 kernel: landlock: Up and running.
Dec 13 13:26:19.877450 kernel: SELinux: Initializing.
Dec 13 13:26:19.877457 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:26:19.877463 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:26:19.877470 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:26:19.877476 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:26:19.877484 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:26:19.877491 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:26:19.877505 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:26:19.877513 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:26:19.877519 kernel: Remapping and enabling EFI services.
Dec 13 13:26:19.877526 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:26:19.877533 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:26:19.877539 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:26:19.877546 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 13:26:19.877555 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:26:19.877562 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:26:19.877573 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:26:19.877581 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:26:19.877588 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 13:26:19.877595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:26:19.877602 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:26:19.877608 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:26:19.877615 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:26:19.877624 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 13:26:19.877630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:26:19.877637 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:26:19.877644 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:26:19.877651 kernel: SMP: Total of 4 processors activated.
Dec 13 13:26:19.877658 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:26:19.877665 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:26:19.877672 kernel: CPU features: detected: Common not Private translations
Dec 13 13:26:19.877678 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:26:19.877686 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 13:26:19.877693 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:26:19.877700 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:26:19.877707 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:26:19.877714 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:26:19.877721 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:26:19.877728 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:26:19.877734 kernel: alternatives: applying system-wide alternatives
Dec 13 13:26:19.877741 kernel: devtmpfs: initialized
Dec 13 13:26:19.877749 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:26:19.877756 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:26:19.877763 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:26:19.877770 kernel: SMBIOS 3.0.0 present.
Dec 13 13:26:19.877777 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 13:26:19.877784 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:26:19.877791 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:26:19.877798 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:26:19.877805 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:26:19.877813 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:26:19.877820 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 13 13:26:19.877827 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:26:19.877834 kernel: cpuidle: using governor menu
Dec 13 13:26:19.877841 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:26:19.877848 kernel: ASID allocator initialised with 32768 entries
Dec 13 13:26:19.877855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:26:19.877861 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:26:19.877868 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 13:26:19.877876 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 13:26:19.877883 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 13:26:19.877890 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:26:19.877897 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:26:19.877903 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:26:19.877910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 13:26:19.877917 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:26:19.877924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:26:19.877931 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:26:19.877939 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 13:26:19.877945 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:26:19.877952 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:26:19.877959 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:26:19.877966 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:26:19.877972 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:26:19.877979 kernel: ACPI: Interpreter enabled
Dec 13 13:26:19.877986 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:26:19.877993 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:26:19.878001 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 13:26:19.878008 kernel: printk: console [ttyAMA0] enabled
Dec 13 13:26:19.878015 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:26:19.878151 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:26:19.878227 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:26:19.878290 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:26:19.878350 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 13:26:19.878413 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 13:26:19.878422 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 13:26:19.878429 kernel: PCI host bridge to bus 0000:00
Dec 13 13:26:19.878497 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 13:26:19.878631 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:26:19.878688 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 13:26:19.878743 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:26:19.878820 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 13:26:19.878904 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:26:19.878968 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 13:26:19.879029 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 13:26:19.879089 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:26:19.879164 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:26:19.879227 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 13:26:19.879292 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 13:26:19.879348 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 13:26:19.879402 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:26:19.879456 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 13:26:19.879465 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:26:19.879472 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:26:19.879479 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:26:19.879486 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:26:19.879495 kernel: iommu: Default domain type: Translated
Dec 13 13:26:19.879541 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:26:19.879549 kernel: efivars: Registered efivars operations
Dec 13 13:26:19.879555 kernel: vgaarb: loaded
Dec 13 13:26:19.879562 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:26:19.879569 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:26:19.879576 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:26:19.879583 kernel: pnp: PnP ACPI init
Dec 13 13:26:19.879657 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 13:26:19.879670 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:26:19.879677 kernel: NET: Registered PF_INET protocol family
Dec 13 13:26:19.879684 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:26:19.879691 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:26:19.879698 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:26:19.879705 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:26:19.879712 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:26:19.879719 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:26:19.879728 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:26:19.879735 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:26:19.879742 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:26:19.879749 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:26:19.879756 kernel: kvm [1]: HYP mode not available
Dec 13 13:26:19.879763 kernel: Initialise system trusted keyrings
Dec 13 13:26:19.879770 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:26:19.879777 kernel: Key type asymmetric registered
Dec 13 13:26:19.879784 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:26:19.879793 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 13:26:19.879800 kernel: io scheduler mq-deadline registered
Dec 13 13:26:19.879807 kernel: io scheduler kyber registered
Dec 13 13:26:19.879815 kernel: io scheduler bfq registered
Dec 13 13:26:19.879822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:26:19.879829 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:26:19.879836 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 13:26:19.879899 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 13:26:19.879909 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:26:19.879917 kernel: thunder_xcv, ver 1.0
Dec 13 13:26:19.879924 kernel: thunder_bgx, ver 1.0
Dec 13 13:26:19.879931 kernel: nicpf, ver 1.0
Dec 13 13:26:19.879937 kernel: nicvf, ver 1.0
Dec 13 13:26:19.880008 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:26:19.880066 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:26:19 UTC (1734096379)
Dec 13 13:26:19.880076 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:26:19.880083 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 13:26:19.880092 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 13:26:19.880099 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 13:26:19.880106 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:26:19.880112 kernel: Segment Routing with IPv6
Dec 13 13:26:19.880119 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:26:19.880126 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:26:19.880142 kernel: Key type dns_resolver registered
Dec 13 13:26:19.880149 kernel: registered taskstats version 1
Dec 13 13:26:19.880157 kernel: Loading compiled-in X.509 certificates
Dec 13 13:26:19.880166 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78'
Dec 13 13:26:19.880173 kernel: Key type .fscrypt registered
Dec 13 13:26:19.880180 kernel: Key type fscrypt-provisioning registered
Dec 13 13:26:19.880187 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:26:19.880194 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:26:19.880201 kernel: ima: No architecture policies found
Dec 13 13:26:19.880208 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:26:19.880215 kernel: clk: Disabling unused clocks
Dec 13 13:26:19.880221 kernel: Freeing unused kernel memory: 39936K
Dec 13 13:26:19.880230 kernel: Run /init as init process
Dec 13 13:26:19.880237 kernel: with arguments:
Dec 13 13:26:19.880243 kernel: /init
Dec 13 13:26:19.880250 kernel: with environment:
Dec 13 13:26:19.880257 kernel: HOME=/
Dec 13 13:26:19.880264 kernel: TERM=linux
Dec 13 13:26:19.880270 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:26:19.880279 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:26:19.880289 systemd[1]: Detected virtualization kvm.
Dec 13 13:26:19.880297 systemd[1]: Detected architecture arm64.
Dec 13 13:26:19.880304 systemd[1]: Running in initrd.
Dec 13 13:26:19.880311 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:26:19.880318 systemd[1]: Hostname set to .
Dec 13 13:26:19.880325 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:26:19.880333 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:26:19.880340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:26:19.880349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:26:19.880357 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:26:19.880364 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:26:19.880372 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:26:19.880380 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:26:19.880388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:26:19.880396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:26:19.880405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:26:19.880412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:26:19.880420 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:26:19.880427 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:26:19.880434 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:26:19.880442 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:26:19.880449 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:26:19.880456 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:26:19.880465 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:26:19.880473 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:26:19.880480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:26:19.880488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:26:19.880495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:26:19.880512 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:26:19.880532 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:26:19.880540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:26:19.880547 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:26:19.880557 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:26:19.880564 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:26:19.880572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:26:19.880579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:26:19.880587 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:26:19.880594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:26:19.880601 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:26:19.880611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:26:19.880637 systemd-journald[238]: Collecting audit messages is disabled.
Dec 13 13:26:19.880658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:26:19.880666 systemd-journald[238]: Journal started
Dec 13 13:26:19.880689 systemd-journald[238]: Runtime Journal (/run/log/journal/9427aa67c14e409385e617d142a1e61f) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:26:19.880724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:26:19.872232 systemd-modules-load[240]: Inserted module 'overlay'
Dec 13 13:26:19.885506 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:26:19.885543 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:26:19.887352 systemd-modules-load[240]: Inserted module 'br_netfilter'
Dec 13 13:26:19.888124 kernel: Bridge firewalling registered
Dec 13 13:26:19.887963 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:26:19.889609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:26:19.892345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:26:19.893432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:26:19.897675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:26:19.901291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:26:19.902413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:26:19.908128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:26:19.911058 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:26:19.912040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:26:19.914217 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:26:19.926971 dracut-cmdline[278]: dracut-dracut-053
Dec 13 13:26:19.929251 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:26:19.941707 systemd-resolved[276]: Positive Trust Anchors:
Dec 13 13:26:19.941723 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:26:19.941755 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:26:19.946329 systemd-resolved[276]: Defaulting to hostname 'linux'.
Dec 13 13:26:19.947263 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:26:19.949374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:26:19.998537 kernel: SCSI subsystem initialized
Dec 13 13:26:20.002520 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:26:20.009518 kernel: iscsi: registered transport (tcp)
Dec 13 13:26:20.022521 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:26:20.022542 kernel: QLogic iSCSI HBA Driver
Dec 13 13:26:20.063042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:26:20.075654 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:26:20.091840 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:26:20.091874 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:26:20.093524 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:26:20.138562 kernel: raid6: neonx8 gen() 15706 MB/s
Dec 13 13:26:20.155518 kernel: raid6: neonx4 gen() 15744 MB/s
Dec 13 13:26:20.172517 kernel: raid6: neonx2 gen() 13139 MB/s
Dec 13 13:26:20.189528 kernel: raid6: neonx1 gen() 10425 MB/s
Dec 13 13:26:20.206515 kernel: raid6: int64x8 gen() 6754 MB/s
Dec 13 13:26:20.223528 kernel: raid6: int64x4 gen() 7302 MB/s
Dec 13 13:26:20.240524 kernel: raid6: int64x2 gen() 6077 MB/s
Dec 13 13:26:20.257522 kernel: raid6: int64x1 gen() 5053 MB/s
Dec 13 13:26:20.257547 kernel: raid6: using algorithm neonx4 gen() 15744 MB/s
Dec 13 13:26:20.274520 kernel: raid6: .... xor() 12356 MB/s, rmw enabled
Dec 13 13:26:20.274533 kernel: raid6: using neon recovery algorithm
Dec 13 13:26:20.279792 kernel: xor: measuring software checksum speed
Dec 13 13:26:20.279809 kernel: 8regs : 21641 MB/sec
Dec 13 13:26:20.279818 kernel: 32regs : 21653 MB/sec
Dec 13 13:26:20.280713 kernel: arm64_neon : 27870 MB/sec
Dec 13 13:26:20.280727 kernel: xor: using function: arm64_neon (27870 MB/sec)
Dec 13 13:26:20.330727 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:26:20.342587 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:26:20.353644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:26:20.364361 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Dec 13 13:26:20.367452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:26:20.386642 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:26:20.398222 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Dec 13 13:26:20.422788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:26:20.433638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:26:20.471622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:26:20.483381 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:26:20.492527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:26:20.493840 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:26:20.494787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:26:20.496271 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:26:20.504646 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:26:20.514904 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:26:20.522544 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 13:26:20.526836 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:26:20.526929 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:26:20.526940 kernel: GPT:9289727 != 19775487
Dec 13 13:26:20.526956 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:26:20.526966 kernel: GPT:9289727 != 19775487
Dec 13 13:26:20.526975 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:26:20.526983 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:26:20.525242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:26:20.525356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:26:20.530427 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:26:20.531463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:26:20.531741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:26:20.533464 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:26:20.540762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:26:20.545378 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (515)
Dec 13 13:26:20.545400 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (520)
Dec 13 13:26:20.550811 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:26:20.551991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:26:20.562574 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:26:20.569476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:26:20.573043 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:26:20.574045 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:26:20.581681 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:26:20.583369 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:26:20.590892 disk-uuid[552]: Primary Header is updated.
Dec 13 13:26:20.590892 disk-uuid[552]: Secondary Entries is updated.
Dec 13 13:26:20.590892 disk-uuid[552]: Secondary Header is updated.
Dec 13 13:26:20.593766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:26:20.602670 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:26:21.612522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:26:21.612944 disk-uuid[556]: The operation has completed successfully.
Dec 13 13:26:21.639296 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:26:21.640275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:26:21.656702 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:26:21.659223 sh[572]: Success
Dec 13 13:26:21.675684 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 13:26:21.700338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:26:21.714762 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:26:21.716111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:26:21.726097 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 13:26:21.726136 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:26:21.726147 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:26:21.726157 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:26:21.727515 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:26:21.729926 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:26:21.731075 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:26:21.739627 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:26:21.741575 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:26:21.747709 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:26:21.747748 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:26:21.747758 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:26:21.751058 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:26:21.756753 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:26:21.757994 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:26:21.761834 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:26:21.768661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:26:21.829644 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:26:21.838662 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:26:21.857100 ignition[662]: Ignition 2.20.0
Dec 13 13:26:21.857110 ignition[662]: Stage: fetch-offline
Dec 13 13:26:21.857152 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:26:21.857161 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:26:21.857309 ignition[662]: parsed url from cmdline: ""
Dec 13 13:26:21.857312 ignition[662]: no config URL provided
Dec 13 13:26:21.857317 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:26:21.857323 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:26:21.861600 systemd-networkd[765]: lo: Link UP
Dec 13 13:26:21.857347 ignition[662]: op(1): [started] loading QEMU firmware config module
Dec 13 13:26:21.861604 systemd-networkd[765]: lo: Gained carrier
Dec 13 13:26:21.857351 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:26:21.862365 systemd-networkd[765]: Enumeration completed
Dec 13 13:26:21.862464 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:26:21.867356 ignition[662]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:26:21.862779 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:26:21.862782 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:26:21.863479 systemd-networkd[765]: eth0: Link UP
Dec 13 13:26:21.863482 systemd-networkd[765]: eth0: Gained carrier
Dec 13 13:26:21.863489 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:26:21.864253 systemd[1]: Reached target network.target - Network.
Dec 13 13:26:21.889550 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:26:21.910856 ignition[662]: parsing config with SHA512: 9e0d5c29ed8e0133251ed1aceda25c89195659f01af634e1a61f107792815390fa7c735a8806d2c59907b3c6cd1ca0a7491f75622cff99088d26c4c3df71520f
Dec 13 13:26:21.917186 unknown[662]: fetched base config from "system"
Dec 13 13:26:21.917196 unknown[662]: fetched user config from "qemu"
Dec 13 13:26:21.917727 ignition[662]: fetch-offline: fetch-offline passed
Dec 13 13:26:21.917800 ignition[662]: Ignition finished successfully
Dec 13 13:26:21.919548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:26:21.920767 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:26:21.924642 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:26:21.935568 ignition[771]: Ignition 2.20.0
Dec 13 13:26:21.935580 ignition[771]: Stage: kargs
Dec 13 13:26:21.935730 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:26:21.935740 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:26:21.936601 ignition[771]: kargs: kargs passed
Dec 13 13:26:21.936644 ignition[771]: Ignition finished successfully
Dec 13 13:26:21.939553 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:26:21.950634 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:26:21.959662 ignition[780]: Ignition 2.20.0
Dec 13 13:26:21.959671 ignition[780]: Stage: disks
Dec 13 13:26:21.959824 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:26:21.959834 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:26:21.961780 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:26:21.960700 ignition[780]: disks: disks passed
Dec 13 13:26:21.962898 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:26:21.960741 ignition[780]: Ignition finished successfully
Dec 13 13:26:21.964163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:26:21.965349 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:26:21.966658 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:26:21.967930 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:26:21.976628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:26:21.985631 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:26:21.989018 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:26:21.991323 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:26:22.035295 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:26:22.036410 kernel: EXT4-fs (vda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 13:26:22.036300 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:26:22.048622 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:26:22.050053 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:26:22.051168 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:26:22.051206 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:26:22.056102 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Dec 13 13:26:22.051226 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:26:22.059378 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:26:22.059394 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:26:22.059403 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:26:22.055394 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:26:22.061519 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:26:22.058953 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:26:22.062840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:26:22.098524 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:26:22.101536 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:26:22.104383 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:26:22.107073 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:26:22.175011 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:26:22.185670 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:26:22.187805 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:26:22.191515 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:26:22.207496 ignition[911]: INFO : Ignition 2.20.0
Dec 13 13:26:22.207496 ignition[911]: INFO : Stage: mount
Dec 13 13:26:22.209939 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:26:22.209939 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:26:22.209939 ignition[911]: INFO : mount: mount passed
Dec 13 13:26:22.209939 ignition[911]: INFO : Ignition finished successfully
Dec 13 13:26:22.208544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:26:22.210053 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:26:22.219632 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:26:22.725972 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:26:22.744666 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:26:22.749521 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Dec 13 13:26:22.751756 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:26:22.751770 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:26:22.751786 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:26:22.753518 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:26:22.754618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:26:22.769452 ignition[942]: INFO : Ignition 2.20.0
Dec 13 13:26:22.769452 ignition[942]: INFO : Stage: files
Dec 13 13:26:22.770793 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:26:22.770793 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:26:22.770793 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:26:22.773431 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:26:22.773431 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:26:22.775759 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:26:22.775759 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:26:22.775759 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:26:22.775479 unknown[942]: wrote ssh authorized keys file for user: core
Dec 13 13:26:22.779864 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:26:22.779864 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 13:26:22.847784 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:26:23.017577 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:26:23.019084 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:26:23.029794 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:26:23.029794 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:26:23.029794 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:26:23.029794 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:26:23.029794 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:26:23.029794 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 13:26:23.357281 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 13:26:23.598267 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:26:23.598267 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 13:26:23.600743 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:26:23.623069 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:26:23.626596 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:26:23.627971 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:26:23.627971 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:26:23.627971 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:26:23.627971 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:26:23.627971 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:26:23.627971 ignition[942]: INFO : files: files passed
Dec 13 13:26:23.627971 ignition[942]: INFO : Ignition finished successfully
Dec 13 13:26:23.629129 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:26:23.641695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:26:23.643086 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:26:23.645939 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:26:23.646031 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:26:23.650224 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:26:23.653517 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:26:23.653517 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:26:23.656116 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:26:23.656693 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:26:23.658709 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:26:23.673748 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:26:23.691657 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:26:23.691775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:26:23.693475 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:26:23.694939 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:26:23.696300 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:26:23.697045 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:26:23.711336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:26:23.719686 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:26:23.726999 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:26:23.727924 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:26:23.729370 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:26:23.730653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:26:23.730758 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:26:23.732553 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:26:23.734004 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:26:23.735197 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:26:23.736398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:26:23.737903 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:26:23.739324 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:26:23.740650 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:26:23.742191 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:26:23.743584 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:26:23.744845 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:26:23.745990 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:26:23.746100 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:26:23.747805 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:26:23.749192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:26:23.750541 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:26:23.750633 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:26:23.752093 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:26:23.752207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:26:23.754329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:26:23.754443 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:26:23.755789 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:26:23.756884 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:26:23.757595 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:26:23.759128 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:26:23.760222 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:26:23.761442 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:26:23.761537 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:26:23.763112 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:26:23.763200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:26:23.764281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:26:23.764384 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:26:23.765611 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:26:23.765708 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:26:23.776727 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:26:23.778714 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:26:23.779338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:26:23.779442 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:26:23.780758 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:26:23.780841 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:26:23.785585 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:26:23.786323 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:26:23.790068 ignition[998]: INFO : Ignition 2.20.0
Dec 13 13:26:23.790068 ignition[998]: INFO : Stage: umount
Dec 13 13:26:23.791456 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:26:23.791456 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:26:23.791456 ignition[998]: INFO : umount: umount passed
Dec 13 13:26:23.791456 ignition[998]: INFO : Ignition finished successfully
Dec 13 13:26:23.792628 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:26:23.793496 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:26:23.793616 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:26:23.795196 systemd[1]: Stopped target network.target - Network.
Dec 13 13:26:23.796194 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:26:23.796247 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:26:23.797521 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:26:23.797559 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:26:23.798910 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:26:23.798951 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:26:23.800127 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:26:23.800170 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:26:23.801638 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:26:23.804678 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:26:23.816299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:26:23.816427 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:26:23.818669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:26:23.818738 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:26:23.820258 systemd-networkd[765]: eth0: DHCPv6 lease lost
Dec 13 13:26:23.821873 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:26:23.821983 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:26:23.822958 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:26:23.822987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:26:23.833643 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:26:23.834365 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:26:23.834417 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:26:23.835827 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:26:23.835863 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:26:23.837177 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:26:23.837219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:26:23.838776 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:26:23.847390 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:26:23.847649 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:26:23.852944 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:26:23.853675 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:26:23.854463 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:26:23.854537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:26:23.855926 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:26:23.856044 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:26:23.857668 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:26:23.857705 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:26:23.859643 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:26:23.859674 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:26:23.860402 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:26:23.860440 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:26:23.861794 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:26:23.861835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:26:23.863917 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:26:23.863953 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:26:23.875616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:26:23.876354 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:26:23.876400 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:26:23.877953 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:26:23.877994 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:26:23.879374 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:26:23.879409 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:26:23.880983 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:26:23.881019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:26:23.882710 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:26:23.882782 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:26:23.885708 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:26:23.891615 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:26:23.896690 systemd[1]: Switching root.
Dec 13 13:26:23.922076 systemd-journald[238]: Journal stopped
Dec 13 13:26:24.558748 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:26:24.558806 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:26:24.558822 kernel: SELinux: policy capability open_perms=1
Dec 13 13:26:24.558831 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:26:24.558842 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:26:24.558851 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:26:24.558863 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:26:24.558873 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:26:24.558885 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:26:24.558894 kernel: audit: type=1403 audit(1734096384.052:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:26:24.558905 systemd[1]: Successfully loaded SELinux policy in 28.712ms.
Dec 13 13:26:24.558924 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.798ms.
Dec 13 13:26:24.558935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:26:24.558945 systemd[1]: Detected virtualization kvm.
Dec 13 13:26:24.558970 systemd[1]: Detected architecture arm64.
Dec 13 13:26:24.558982 systemd[1]: Detected first boot.
Dec 13 13:26:24.558993 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:26:24.559003 zram_generator::config[1043]: No configuration found.
Dec 13 13:26:24.559014 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:26:24.559024 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:26:24.559033 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:26:24.559044 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:26:24.559054 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:26:24.559066 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:26:24.559077 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:26:24.559088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:26:24.559098 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:26:24.559108 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:26:24.559126 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:26:24.559138 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:26:24.559148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:26:24.559158 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:26:24.559171 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:26:24.559180 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:26:24.559190 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:26:24.559200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:26:24.559210 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 13:26:24.559220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:26:24.559230 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:26:24.559239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:26:24.559249 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:26:24.559261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:26:24.559271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:26:24.559281 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:26:24.559290 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:26:24.559300 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:26:24.559311 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:26:24.559321 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:26:24.559331 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:26:24.559342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:26:24.559353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:26:24.559362 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:26:24.559373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:26:24.559382 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:26:24.559392 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:26:24.559402 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:26:24.559411 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:26:24.559421 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:26:24.559433 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:26:24.559443 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:26:24.559453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:26:24.559463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:26:24.559473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:26:24.559484 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:26:24.559493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:26:24.559529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:26:24.559547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:26:24.559557 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:26:24.559566 kernel: fuse: init (API version 7.39)
Dec 13 13:26:24.559576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:26:24.559586 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:26:24.559596 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:26:24.559606 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:26:24.559616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:26:24.559626 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:26:24.559639 kernel: ACPI: bus type drm_connector registered
Dec 13 13:26:24.559649 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:26:24.559660 kernel: loop: module loaded
Dec 13 13:26:24.559669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:26:24.559680 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:26:24.559689 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:26:24.559699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:26:24.559709 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:26:24.559718 systemd[1]: Stopped verity-setup.service.
Dec 13 13:26:24.559734 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:26:24.559743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:26:24.559771 systemd-journald[1121]: Collecting audit messages is disabled.
Dec 13 13:26:24.559791 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:26:24.559802 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:26:24.559812 systemd-journald[1121]: Journal started
Dec 13 13:26:24.559838 systemd-journald[1121]: Runtime Journal (/run/log/journal/9427aa67c14e409385e617d142a1e61f) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:26:24.386421 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:26:24.398829 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:26:24.399204 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:26:24.562522 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:26:24.562978 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:26:24.564008 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:26:24.565078 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:26:24.567529 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:26:24.568750 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:26:24.568876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:26:24.570155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:26:24.570292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:26:24.571469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:26:24.571647 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:26:24.572745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:26:24.572866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:26:24.574244 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:26:24.574369 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:26:24.575556 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:26:24.575690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:26:24.576825 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:26:24.578160 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:26:24.579418 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:26:24.590865 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:26:24.599614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:26:24.601423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:26:24.602400 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:26:24.602427 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:26:24.604213 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:26:24.606134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:26:24.607994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:26:24.608936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:26:24.610196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:26:24.611798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:26:24.612691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:26:24.615679 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:26:24.616735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:26:24.619716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:26:24.621469 systemd-journald[1121]: Time spent on flushing to /var/log/journal/9427aa67c14e409385e617d142a1e61f is 20.984ms for 857 entries.
Dec 13 13:26:24.621469 systemd-journald[1121]: System Journal (/var/log/journal/9427aa67c14e409385e617d142a1e61f) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:26:24.657918 systemd-journald[1121]: Received client request to flush runtime journal.
Dec 13 13:26:24.657963 kernel: loop0: detected capacity change from 0 to 116784
Dec 13 13:26:24.622870 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:26:24.627823 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:26:24.632006 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:26:24.633176 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:26:24.634089 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:26:24.636780 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:26:24.637903 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:26:24.643602 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:26:24.647217 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:26:24.650725 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:26:24.667570 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:26:24.670778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:26:24.672787 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Dec 13 13:26:24.673594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:26:24.672802 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Dec 13 13:26:24.672925 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:26:24.677837 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:26:24.691640 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:26:24.693140 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:26:24.693687 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:26:24.714457 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:26:24.718546 kernel: loop1: detected capacity change from 0 to 113552
Dec 13 13:26:24.728671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:26:24.741944 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Dec 13 13:26:24.741963 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Dec 13 13:26:24.745758 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:26:24.749527 kernel: loop2: detected capacity change from 0 to 194096
Dec 13 13:26:24.807541 kernel: loop3: detected capacity change from 0 to 116784
Dec 13 13:26:24.813536 kernel: loop4: detected capacity change from 0 to 113552
Dec 13 13:26:24.818518 kernel: loop5: detected capacity change from 0 to 194096
Dec 13 13:26:24.823081 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 13:26:24.823820 (sd-merge)[1184]: Merged extensions into '/usr'.
Dec 13 13:26:24.827574 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:26:24.827587 systemd[1]: Reloading...
Dec 13 13:26:24.875529 zram_generator::config[1210]: No configuration found.
Dec 13 13:26:24.888901 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:26:24.962950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:26:24.997416 systemd[1]: Reloading finished in 169 ms.
Dec 13 13:26:25.035067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:26:25.036553 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:26:25.057735 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:26:25.059341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:26:25.074322 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:26:25.074338 systemd[1]: Reloading...
Dec 13 13:26:25.079087 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:26:25.079615 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:26:25.080352 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:26:25.080685 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Dec 13 13:26:25.080809 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Dec 13 13:26:25.083295 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:26:25.083384 systemd-tmpfiles[1245]: Skipping /boot
Dec 13 13:26:25.091343 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:26:25.091441 systemd-tmpfiles[1245]: Skipping /boot
Dec 13 13:26:25.116534 zram_generator::config[1271]: No configuration found.
Dec 13 13:26:25.192489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:26:25.226958 systemd[1]: Reloading finished in 152 ms.
Dec 13 13:26:25.241126 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:26:25.256905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:26:25.263773 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:26:25.265907 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:26:25.267804 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:26:25.271646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:26:25.274917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:26:25.279122 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:26:25.284455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:26:25.285496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:26:25.289166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:26:25.293425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:26:25.294369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:26:25.298777 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:26:25.300599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:26:25.302035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:26:25.303459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:26:25.303594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:26:25.304463 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Dec 13 13:26:25.305274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:26:25.305389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:26:25.308242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:26:25.313467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:26:25.313718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:26:25.322852 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:26:25.326548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:26:25.329238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:26:25.335760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:26:25.337666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:26:25.338434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:26:25.338968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:26:25.343631 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:26:25.345612 augenrules[1353]: No rules
Dec 13 13:26:25.345610 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:26:25.346824 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:26:25.346975 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:26:25.348522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:26:25.348651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:26:25.350524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:26:25.351667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:26:25.353261 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:26:25.353401 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:26:25.354891 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:26:25.356387 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:26:25.368146 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:26:25.384711 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:26:25.386812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:26:25.388688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:26:25.392482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:26:25.394536 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1374)
Dec 13 13:26:25.396312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:26:25.399532 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1346)
Dec 13 13:26:25.399592 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1346)
Dec 13 13:26:25.403374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:26:25.404250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:26:25.405798 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:26:25.408837 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:26:25.409818 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:26:25.410232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:26:25.410506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:26:25.416276 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:26:25.417577 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:26:25.420344 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 13:26:25.423731 augenrules[1382]: /sbin/augenrules: No change
Dec 13 13:26:25.428232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:26:25.429554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:26:25.430785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:26:25.430913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:26:25.432637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:26:25.432703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:26:25.434778 augenrules[1412]: No rules
Dec 13 13:26:25.436302 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:26:25.436858 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:26:25.454017 systemd-resolved[1311]: Positive Trust Anchors:
Dec 13 13:26:25.454599 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:26:25.454642 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:26:25.461169 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Dec 13 13:26:25.466152 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:26:25.467453 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:26:25.482081 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:26:25.483709 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:26:25.487810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:26:25.493489 systemd-networkd[1397]: lo: Link UP
Dec 13 13:26:25.493656 systemd-networkd[1397]: lo: Gained carrier
Dec 13 13:26:25.494375 systemd-networkd[1397]: Enumeration completed
Dec 13 13:26:25.496366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:26:25.497289 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:26:25.498768 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:26:25.498777 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:26:25.501804 systemd-networkd[1397]: eth0: Link UP
Dec 13 13:26:25.501811 systemd-networkd[1397]: eth0: Gained carrier
Dec 13 13:26:25.501825 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:26:25.502437 systemd[1]: Reached target network.target - Network.
Dec 13 13:26:25.504268 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:26:25.507275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:26:25.514057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:26:25.517624 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:26:25.518301 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
Dec 13 13:26:25.111361 systemd-resolved[1311]: Clock change detected. Flushing caches.
Dec 13 13:26:25.115753 systemd-journald[1121]: Time jumped backwards, rotating.
Dec 13 13:26:25.111997 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:26:25.112040 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:26:25.112048 systemd-timesyncd[1399]: Initial clock synchronization to Fri 2024-12-13 13:26:25.111324 UTC.
Dec 13 13:26:25.121085 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:26:25.138841 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:26:25.144878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:26:25.176949 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:26:25.178157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:26:25.179061 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:26:25.179987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:26:25.180963 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:26:25.182092 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:26:25.183022 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:26:25.184036 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:26:25.185058 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:26:25.185091 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:26:25.185797 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:26:25.187922 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:26:25.189888 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:26:25.202634 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:26:25.204517 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:26:25.205773 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:26:25.206677 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:26:25.207396 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:26:25.208101 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:26:25.208130 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:26:25.208962 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:26:25.210589 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:26:25.213000 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:26:25.214020 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:26:25.219277 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:26:25.220031 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:26:25.223138 jq[1444]: false
Dec 13 13:26:25.221786 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:26:25.225092 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:26:25.228084 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:26:25.231882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:26:25.234736 extend-filesystems[1445]: Found loop3
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found loop4
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found loop5
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda1
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda2
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda3
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found usr
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda4
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda6
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda7
Dec 13 13:26:25.236031 extend-filesystems[1445]: Found vda9
Dec 13 13:26:25.236031 extend-filesystems[1445]: Checking size of /dev/vda9
Dec 13 13:26:25.236021 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:26:25.244562 dbus-daemon[1443]: [system] SELinux support is enabled
Dec 13 13:26:25.237381 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:26:25.237754 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:26:25.241047 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:26:25.243610 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:26:25.245004 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:26:25.249214 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:26:25.254300 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:26:25.254465 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:26:25.254661 jq[1459]: true
Dec 13 13:26:25.257369 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:26:25.257523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:26:25.261331 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:26:25.261812 extend-filesystems[1445]: Resized partition /dev/vda9
Dec 13 13:26:25.264116 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:26:25.274852 jq[1468]: true
Dec 13 13:26:25.281637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1345)
Dec 13 13:26:25.281737 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:26:25.292290 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:26:25.298894 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:26:25.299338 tar[1466]: linux-arm64/helm
Dec 13 13:26:25.304322 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:26:25.304359 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:26:25.306223 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:26:25.306246 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:26:25.310655 update_engine[1455]: I20241213 13:26:25.309525 1455 main.cc:92] Flatcar Update Engine starting
Dec 13 13:26:25.315281 update_engine[1455]: I20241213 13:26:25.314911 1455 update_check_scheduler.cc:74] Next update check in 4m59s
Dec 13 13:26:25.316469 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:26:25.324465 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:26:25.331086 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:26:25.338134 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:26:25.338134 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:26:25.338134 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:26:25.354527 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Dec 13 13:26:25.338754 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 13:26:25.340842 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:26:25.342441 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:26:25.344938 systemd-logind[1454]: New seat seat0.
Dec 13 13:26:25.350718 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:26:25.375960 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:26:25.379080 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:26:25.380552 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:26:25.384056 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:26:25.507945 containerd[1470]: time="2024-12-13T13:26:25.507825997Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:26:25.535212 containerd[1470]: time="2024-12-13T13:26:25.534914957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.536325 containerd[1470]: time="2024-12-13T13:26:25.536292157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536446477Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536471997Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536625197Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536643277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536696357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536708437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536851837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536864877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536900397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536909517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.536978917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537491 containerd[1470]: time="2024-12-13T13:26:25.537153637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537770 containerd[1470]: time="2024-12-13T13:26:25.537240237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:26:25.537770 containerd[1470]: time="2024-12-13T13:26:25.537252197Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:26:25.537770 containerd[1470]: time="2024-12-13T13:26:25.537320077Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:26:25.537770 containerd[1470]: time="2024-12-13T13:26:25.537357477Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:26:25.541355 containerd[1470]: time="2024-12-13T13:26:25.541329597Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:26:25.541470 containerd[1470]: time="2024-12-13T13:26:25.541453477Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:26:25.541589 containerd[1470]: time="2024-12-13T13:26:25.541571477Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:26:25.541674 containerd[1470]: time="2024-12-13T13:26:25.541660157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:26:25.541785 containerd[1470]: time="2024-12-13T13:26:25.541769477Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:26:25.542051 containerd[1470]: time="2024-12-13T13:26:25.542031357Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:26:25.542428 containerd[1470]: time="2024-12-13T13:26:25.542398597Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:26:25.542564 containerd[1470]: time="2024-12-13T13:26:25.542545797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:26:25.542590 containerd[1470]: time="2024-12-13T13:26:25.542567557Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:26:25.542590 containerd[1470]: time="2024-12-13T13:26:25.542583517Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:26:25.542627 containerd[1470]: time="2024-12-13T13:26:25.542597037Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542627 containerd[1470]: time="2024-12-13T13:26:25.542609317Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542627 containerd[1470]: time="2024-12-13T13:26:25.542621757Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542674 containerd[1470]: time="2024-12-13T13:26:25.542634877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542674 containerd[1470]: time="2024-12-13T13:26:25.542649197Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542674 containerd[1470]: time="2024-12-13T13:26:25.542661717Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542674 containerd[1470]: time="2024-12-13T13:26:25.542672917Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542734 containerd[1470]: time="2024-12-13T13:26:25.542683957Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:26:25.542734 containerd[1470]: time="2024-12-13T13:26:25.542702997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542734 containerd[1470]: time="2024-12-13T13:26:25.542715037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542734 containerd[1470]: time="2024-12-13T13:26:25.542726037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542797 containerd[1470]: time="2024-12-13T13:26:25.542736917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542797 containerd[1470]: time="2024-12-13T13:26:25.542748957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542797 containerd[1470]: time="2024-12-13T13:26:25.542761077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542797 containerd[1470]: time="2024-12-13T13:26:25.542772437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542797 containerd[1470]: time="2024-12-13T13:26:25.542788637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542888 containerd[1470]: time="2024-12-13T13:26:25.542801557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542888 containerd[1470]: time="2024-12-13T13:26:25.542815957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542888 containerd[1470]: time="2024-12-13T13:26:25.542829837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542888 containerd[1470]: time="2024-12-13T13:26:25.542840397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542888 containerd[1470]: time="2024-12-13T13:26:25.542851237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.542888 containerd[1470]: time="2024-12-13T13:26:25.542864077Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:26:25.543019 containerd[1470]: time="2024-12-13T13:26:25.542899717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.543019 containerd[1470]: time="2024-12-13T13:26:25.542913837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.543019 containerd[1470]: time="2024-12-13T13:26:25.542924357Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:26:25.543106 containerd[1470]: time="2024-12-13T13:26:25.543091717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:26:25.543126 containerd[1470]: time="2024-12-13T13:26:25.543110637Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:26:25.543126 containerd[1470]: time="2024-12-13T13:26:25.543121077Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:26:25.543164 containerd[1470]: time="2024-12-13T13:26:25.543133077Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:26:25.543164 containerd[1470]: time="2024-12-13T13:26:25.543142437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.543164 containerd[1470]: time="2024-12-13T13:26:25.543153477Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:26:25.543164 containerd[1470]: time="2024-12-13T13:26:25.543163157Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:26:25.543229 containerd[1470]: time="2024-12-13T13:26:25.543173917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:26:25.543578 containerd[1470]: time="2024-12-13T13:26:25.543531317Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:26:25.543578 containerd[1470]: time="2024-12-13T13:26:25.543582477Z" level=info msg="Connect containerd service"
Dec 13 13:26:25.543699 containerd[1470]: time="2024-12-13T13:26:25.543612757Z" level=info msg="using legacy CRI server"
Dec 13 13:26:25.543699 containerd[1470]: time="2024-12-13T13:26:25.543619957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:26:25.543864 containerd[1470]: time="2024-12-13T13:26:25.543847677Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:26:25.547489 containerd[1470]: time="2024-12-13T13:26:25.546834277Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:26:25.547489 containerd[1470]: time="2024-12-13T13:26:25.547353317Z" level=info msg="Start subscribing containerd event"
Dec 13 13:26:25.547489 containerd[1470]: time="2024-12-13T13:26:25.547403277Z" level=info msg="Start recovering state"
Dec 13 13:26:25.547489 containerd[1470]: time="2024-12-13T13:26:25.547361157Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:26:25.547634 containerd[1470]: time="2024-12-13T13:26:25.547610237Z" level=info msg="Start event monitor"
Dec 13 13:26:25.547634 containerd[1470]: time="2024-12-13T13:26:25.547632397Z" level=info msg="Start snapshots syncer"
Dec 13 13:26:25.547687 containerd[1470]: time="2024-12-13T13:26:25.547644517Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:26:25.547687 containerd[1470]: time="2024-12-13T13:26:25.547652237Z" level=info msg="Start streaming server"
Dec 13 13:26:25.547956 containerd[1470]: time="2024-12-13T13:26:25.547924997Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:26:25.549139 containerd[1470]: time="2024-12-13T13:26:25.547990477Z" level=info msg="containerd successfully booted in 0.041050s"
Dec 13 13:26:25.548065 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:26:25.688905 tar[1466]: linux-arm64/LICENSE
Dec 13 13:26:25.689008 tar[1466]: linux-arm64/README.md
Dec 13 13:26:25.699041 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:26:26.310732 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:26:26.328016 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:26:26.334101 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:26:26.339560 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:26:26.339739 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:26:26.342405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:26:26.354165 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:26:26.356422 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:26:26.358163 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 13:26:26.359111 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:26:26.889081 systemd-networkd[1397]: eth0: Gained IPv6LL
Dec 13 13:26:26.891432 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:26:26.893240 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:26:26.910109 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 13:26:26.912411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:26.914338 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:26:26.928615 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:26:26.929530 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 13:26:26.931442 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:26:26.933628 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:26:27.399266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:27.400643 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:26:27.401596 systemd[1]: Startup finished in 524ms (kernel) + 4.359s (initrd) + 3.787s (userspace) = 8.671s.
Dec 13 13:26:27.403290 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:26:27.420257 agetty[1533]: failed to open credentials directory
Dec 13 13:26:27.421297 agetty[1532]: failed to open credentials directory
Dec 13 13:26:27.898881 kubelet[1556]: E1213 13:26:27.898821    1556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:26:27.901142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:26:27.901290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:26:32.007395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:26:32.008540 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:39046.service - OpenSSH per-connection server daemon (10.0.0.1:39046).
Dec 13 13:26:32.072162 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 39046 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:32.073858 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:32.085464 systemd-logind[1454]: New session 1 of user core.
Dec 13 13:26:32.086449 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:26:32.098091 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:26:32.108366 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:26:32.111701 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:26:32.117355 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:26:32.188072 systemd[1575]: Queued start job for default target default.target.
Dec 13 13:26:32.199825 systemd[1575]: Created slice app.slice - User Application Slice.
Dec 13 13:26:32.199897 systemd[1575]: Reached target paths.target - Paths.
Dec 13 13:26:32.199911 systemd[1575]: Reached target timers.target - Timers.
Dec 13 13:26:32.201166 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:26:32.214333 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:26:32.214443 systemd[1575]: Reached target sockets.target - Sockets.
Dec 13 13:26:32.214466 systemd[1575]: Reached target basic.target - Basic System.
Dec 13 13:26:32.214504 systemd[1575]: Reached target default.target - Main User Target.
Dec 13 13:26:32.214531 systemd[1575]: Startup finished in 92ms.
Dec 13 13:26:32.214697 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:26:32.216044 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:26:32.276028 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:39058.service - OpenSSH per-connection server daemon (10.0.0.1:39058).
Dec 13 13:26:32.321176 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 39058 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:32.322454 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:32.326582 systemd-logind[1454]: New session 2 of user core.
Dec 13 13:26:32.337046 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:26:32.387433 sshd[1588]: Connection closed by 10.0.0.1 port 39058
Dec 13 13:26:32.388705 sshd-session[1586]: pam_unix(sshd:session): session closed for user core
Dec 13 13:26:32.398016 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:39058.service: Deactivated successfully.
Dec 13 13:26:32.399221 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:26:32.401237 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:26:32.402782 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:39068.service - OpenSSH per-connection server daemon (10.0.0.1:39068).
Dec 13 13:26:32.405891 systemd-logind[1454]: Removed session 2.
Dec 13 13:26:32.474699 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 39068 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:32.475907 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:32.480109 systemd-logind[1454]: New session 3 of user core.
Dec 13 13:26:32.491035 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:26:32.538974 sshd[1595]: Connection closed by 10.0.0.1 port 39068
Dec 13 13:26:32.539501 sshd-session[1593]: pam_unix(sshd:session): session closed for user core
Dec 13 13:26:32.553096 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:39068.service: Deactivated successfully.
Dec 13 13:26:32.555374 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 13:26:32.556947 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit.
Dec 13 13:26:32.558146 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838).
Dec 13 13:26:32.560242 systemd-logind[1454]: Removed session 3.
Dec 13 13:26:32.627580 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:32.628614 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:32.632399 systemd-logind[1454]: New session 4 of user core.
Dec 13 13:26:32.642998 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:26:32.693327 sshd[1602]: Connection closed by 10.0.0.1 port 54838
Dec 13 13:26:32.693747 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Dec 13 13:26:32.708019 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:54838.service: Deactivated successfully.
Dec 13 13:26:32.710238 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:26:32.713010 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:26:32.714962 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:54852.service - OpenSSH per-connection server daemon (10.0.0.1:54852).
Dec 13 13:26:32.715317 systemd-logind[1454]: Removed session 4.
Dec 13 13:26:32.758807 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 54852 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:32.759887 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:32.763473 systemd-logind[1454]: New session 5 of user core.
Dec 13 13:26:32.778002 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:26:32.832710 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:26:32.833010 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:26:32.847621 sudo[1610]: pam_unix(sudo:session): session closed for user root
Dec 13 13:26:32.848910 sshd[1609]: Connection closed by 10.0.0.1 port 54852
Dec 13 13:26:32.849204 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Dec 13 13:26:32.862329 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:54852.service: Deactivated successfully.
Dec 13 13:26:32.863715 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:26:32.864969 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:26:32.866900 systemd[1]: Started sshd@5-10.0.0.123:22-10.0.0.1:54856.service - OpenSSH per-connection server daemon (10.0.0.1:54856).
Dec 13 13:26:32.868158 systemd-logind[1454]: Removed session 5.
Dec 13 13:26:32.911548 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 54856 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:32.912714 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:32.917152 systemd-logind[1454]: New session 6 of user core.
Dec 13 13:26:32.923032 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:26:32.973113 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:26:32.973383 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:26:32.976416 sudo[1619]: pam_unix(sudo:session): session closed for user root
Dec 13 13:26:32.981112 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:26:32.981364 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:26:33.000257 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:26:33.023042 augenrules[1641]: No rules
Dec 13 13:26:33.024157 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:26:33.024975 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:26:33.026040 sudo[1618]: pam_unix(sudo:session): session closed for user root
Dec 13 13:26:33.027333 sshd[1617]: Connection closed by 10.0.0.1 port 54856
Dec 13 13:26:33.027855 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Dec 13 13:26:33.039164 systemd[1]: sshd@5-10.0.0.123:22-10.0.0.1:54856.service: Deactivated successfully.
Dec 13 13:26:33.040522 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:26:33.042909 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:26:33.047101 systemd[1]: Started sshd@6-10.0.0.123:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866).
Dec 13 13:26:33.047926 systemd-logind[1454]: Removed session 6.
Dec 13 13:26:33.089377 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:26:33.090911 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:26:33.094770 systemd-logind[1454]: New session 7 of user core.
Dec 13 13:26:33.104019 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:26:33.154762 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:26:33.155336 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:26:33.493098 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:26:33.493191 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:26:33.740462 dockerd[1672]: time="2024-12-13T13:26:33.740396957Z" level=info msg="Starting up"
Dec 13 13:26:33.885263 dockerd[1672]: time="2024-12-13T13:26:33.885156317Z" level=info msg="Loading containers: start."
Dec 13 13:26:34.056962 kernel: Initializing XFRM netlink socket
Dec 13 13:26:34.124480 systemd-networkd[1397]: docker0: Link UP
Dec 13 13:26:34.156313 dockerd[1672]: time="2024-12-13T13:26:34.156191957Z" level=info msg="Loading containers: done."
Dec 13 13:26:34.185604 dockerd[1672]: time="2024-12-13T13:26:34.185547477Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:26:34.185838 dockerd[1672]: time="2024-12-13T13:26:34.185651557Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:26:34.186055 dockerd[1672]: time="2024-12-13T13:26:34.186033237Z" level=info msg="Daemon has completed initialization"
Dec 13 13:26:34.214986 dockerd[1672]: time="2024-12-13T13:26:34.214925077Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:26:34.215247 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:26:34.918860 containerd[1470]: time="2024-12-13T13:26:34.918818917Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 13:26:35.633461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555247813.mount: Deactivated successfully.
Dec 13 13:26:36.784434 containerd[1470]: time="2024-12-13T13:26:36.784382917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:36.785417 containerd[1470]: time="2024-12-13T13:26:36.785200357Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012"
Dec 13 13:26:36.786237 containerd[1470]: time="2024-12-13T13:26:36.786207437Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:36.790901 containerd[1470]: time="2024-12-13T13:26:36.790348317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:36.791223 containerd[1470]: time="2024-12-13T13:26:36.791049717Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 1.8721848s"
Dec 13 13:26:36.791223 containerd[1470]: time="2024-12-13T13:26:36.791087797Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\""
Dec 13 13:26:36.808914 containerd[1470]: time="2024-12-13T13:26:36.808864917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 13:26:37.924598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:26:37.933124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:38.026620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:38.030142 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:26:38.071467 kubelet[1952]: E1213 13:26:38.071310 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:26:38.074745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:26:38.075527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:26:38.537430 containerd[1470]: time="2024-12-13T13:26:38.537380557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:38.538378 containerd[1470]: time="2024-12-13T13:26:38.538113477Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696"
Dec 13 13:26:38.541591 containerd[1470]: time="2024-12-13T13:26:38.541530197Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:38.544290 containerd[1470]: time="2024-12-13T13:26:38.544239117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:38.545482 containerd[1470]: time="2024-12-13T13:26:38.545451477Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.73652548s"
Dec 13 13:26:38.545555 containerd[1470]: time="2024-12-13T13:26:38.545485037Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\""
Dec 13 13:26:38.564168 containerd[1470]: time="2024-12-13T13:26:38.564131317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 13:26:39.449804 containerd[1470]: time="2024-12-13T13:26:39.449748517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:39.450873 containerd[1470]: time="2024-12-13T13:26:39.450816957Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334"
Dec 13 13:26:39.451055 containerd[1470]: time="2024-12-13T13:26:39.451021797Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:39.454001 containerd[1470]: time="2024-12-13T13:26:39.453965557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:39.455864 containerd[1470]: time="2024-12-13T13:26:39.455826077Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 891.6562ms"
Dec 13 13:26:39.455909 containerd[1470]: time="2024-12-13T13:26:39.455862877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\""
Dec 13 13:26:39.474213 containerd[1470]: time="2024-12-13T13:26:39.474171597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 13:26:40.486066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173246379.mount: Deactivated successfully.
Dec 13 13:26:40.672391 containerd[1470]: time="2024-12-13T13:26:40.672331477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:40.673249 containerd[1470]: time="2024-12-13T13:26:40.673186677Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013"
Dec 13 13:26:40.673551 containerd[1470]: time="2024-12-13T13:26:40.673516957Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:40.675847 containerd[1470]: time="2024-12-13T13:26:40.675814157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:40.676645 containerd[1470]: time="2024-12-13T13:26:40.676614917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.20240452s"
Dec 13 13:26:40.676674 containerd[1470]: time="2024-12-13T13:26:40.676645517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Dec 13 13:26:40.694911 containerd[1470]: time="2024-12-13T13:26:40.694761157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:26:41.326043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972641957.mount: Deactivated successfully.
Dec 13 13:26:42.003917 containerd[1470]: time="2024-12-13T13:26:42.003282957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:42.003917 containerd[1470]: time="2024-12-13T13:26:42.003694077Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Dec 13 13:26:42.005516 containerd[1470]: time="2024-12-13T13:26:42.005476197Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:42.010419 containerd[1470]: time="2024-12-13T13:26:42.010370237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:42.013036 containerd[1470]: time="2024-12-13T13:26:42.012992277Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.3181886s"
Dec 13 13:26:42.013036 containerd[1470]: time="2024-12-13T13:26:42.013031037Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 13:26:42.030982 containerd[1470]: time="2024-12-13T13:26:42.030943077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 13:26:42.474278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699350294.mount: Deactivated successfully.
Dec 13 13:26:42.478500 containerd[1470]: time="2024-12-13T13:26:42.478276237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:42.479151 containerd[1470]: time="2024-12-13T13:26:42.479084517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Dec 13 13:26:42.480903 containerd[1470]: time="2024-12-13T13:26:42.479766517Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:42.481960 containerd[1470]: time="2024-12-13T13:26:42.481932077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:42.483056 containerd[1470]: time="2024-12-13T13:26:42.483018157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 452.03508ms"
Dec 13 13:26:42.483105 containerd[1470]: time="2024-12-13T13:26:42.483055037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 13:26:42.502752 containerd[1470]: time="2024-12-13T13:26:42.502703677Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 13:26:43.108249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268976403.mount: Deactivated successfully.
Dec 13 13:26:44.850440 containerd[1470]: time="2024-12-13T13:26:44.850383877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:44.852178 containerd[1470]: time="2024-12-13T13:26:44.852115677Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Dec 13 13:26:44.853375 containerd[1470]: time="2024-12-13T13:26:44.853321037Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:44.856230 containerd[1470]: time="2024-12-13T13:26:44.856181797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:26:44.857351 containerd[1470]: time="2024-12-13T13:26:44.857312117Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.35456728s"
Dec 13 13:26:44.857351 containerd[1470]: time="2024-12-13T13:26:44.857347517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Dec 13 13:26:48.174643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:26:48.185048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:48.282758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:48.286299 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:26:48.322545 kubelet[2183]: E1213 13:26:48.322498 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:26:48.325163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:26:48.325301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:26:49.686020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:49.696074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:49.715702 systemd[1]: Reloading requested from client PID 2198 ('systemctl') (unit session-7.scope)...
Dec 13 13:26:49.715720 systemd[1]: Reloading...
Dec 13 13:26:49.784565 zram_generator::config[2239]: No configuration found.
Dec 13 13:26:49.868847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:26:49.919839 systemd[1]: Reloading finished in 203 ms.
Dec 13 13:26:49.964022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:49.966502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:49.968238 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:26:49.968430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:49.976354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:50.078311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:50.082843 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:26:50.119264 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:26:50.119264 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:26:50.119264 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:26:50.120117 kubelet[2284]: I1213 13:26:50.120073 2284 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:26:51.391477 kubelet[2284]: I1213 13:26:51.391432 2284 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 13:26:51.391785 kubelet[2284]: I1213 13:26:51.391510 2284 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:26:51.391785 kubelet[2284]: I1213 13:26:51.391711 2284 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 13:26:51.419613 kubelet[2284]: E1213 13:26:51.419573 2284 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.419836 kubelet[2284]: I1213 13:26:51.419811 2284 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:26:51.428501 kubelet[2284]: I1213 13:26:51.428477 2284 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:26:51.429908 kubelet[2284]: I1213 13:26:51.429719 2284 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:26:51.430009 kubelet[2284]: I1213 13:26:51.429756 2284 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:26:51.430009 kubelet[2284]: I1213 13:26:51.430002 2284 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:26:51.430009 kubelet[2284]: I1213 13:26:51.430010 2284 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:26:51.430279 kubelet[2284]: I1213 13:26:51.430245 2284 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:26:51.431056 kubelet[2284]: I1213 13:26:51.431036 2284 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 13:26:51.431056 kubelet[2284]: I1213 13:26:51.431056 2284 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:26:51.431167 kubelet[2284]: I1213 13:26:51.431148 2284 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:26:51.431354 kubelet[2284]: I1213 13:26:51.431336 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:26:51.431810 kubelet[2284]: W1213 13:26:51.431757 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.431838 kubelet[2284]: E1213 13:26:51.431822 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.432201 kubelet[2284]: W1213 13:26:51.432168 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.432242 kubelet[2284]: E1213 13:26:51.432208 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.432549 kubelet[2284]: I1213 13:26:51.432532 2284 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:26:51.432901 kubelet[2284]: I1213 13:26:51.432890 2284 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:26:51.433003 kubelet[2284]: W1213 13:26:51.432992 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 13:26:51.433796 kubelet[2284]: I1213 13:26:51.433782 2284 server.go:1264] "Started kubelet"
Dec 13 13:26:51.435145 kubelet[2284]: I1213 13:26:51.435120 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:26:51.436849 kubelet[2284]: I1213 13:26:51.436803 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:26:51.437799 kubelet[2284]: I1213 13:26:51.437770 2284 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 13:26:51.438405 kubelet[2284]: I1213 13:26:51.438371 2284 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:26:51.438511 kubelet[2284]: I1213 13:26:51.438490 2284 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 13:26:51.438607 kubelet[2284]: I1213 13:26:51.438562 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:26:51.438820 kubelet[2284]: I1213 13:26:51.438755 2284 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:26:51.439845 kubelet[2284]: I1213 13:26:51.439816 2284 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:26:51.440368 kubelet[2284]: W1213 13:26:51.440331 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.440410 kubelet[2284]: E1213 13:26:51.440378 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.442024 kubelet[2284]: E1213 13:26:51.441999 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="200ms"
Dec 13 13:26:51.442899 kubelet[2284]: E1213 13:26:51.442674 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.123:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.123:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf7fa1c9fdbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:26:51.433762237 +0000 UTC m=+1.347907961,LastTimestamp:2024-12-13 13:26:51.433762237 +0000 UTC m=+1.347907961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 13:26:51.443066 kubelet[2284]: I1213 13:26:51.443037 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:26:51.444847 kubelet[2284]: I1213 13:26:51.444644 2284 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:26:51.444847 kubelet[2284]: I1213 13:26:51.444663 2284 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:26:51.447488 kubelet[2284]: E1213 13:26:51.447455 2284 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:26:51.449949 kubelet[2284]: I1213 13:26:51.449917 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:26:51.452815 kubelet[2284]: I1213 13:26:51.452775 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:26:51.452964 kubelet[2284]: I1213 13:26:51.452943 2284 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:26:51.452964 kubelet[2284]: I1213 13:26:51.452963 2284 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 13:26:51.453017 kubelet[2284]: E1213 13:26:51.453000 2284 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:26:51.455694 kubelet[2284]: W1213 13:26:51.455637 2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.455694 kubelet[2284]: E1213 13:26:51.455674 2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:51.458852 kubelet[2284]: I1213 13:26:51.458833 2284 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:26:51.458852
kubelet[2284]: I1213 13:26:51.458847 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:26:51.459021 kubelet[2284]: I1213 13:26:51.458910 2284 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:26:51.540886 kubelet[2284]: I1213 13:26:51.540542 2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:26:51.540886 kubelet[2284]: E1213 13:26:51.540854 2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Dec 13 13:26:51.553395 kubelet[2284]: E1213 13:26:51.553369 2284 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:26:51.642639 kubelet[2284]: E1213 13:26:51.642544 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="400ms" Dec 13 13:26:51.659636 kubelet[2284]: I1213 13:26:51.659542 2284 policy_none.go:49] "None policy: Start" Dec 13 13:26:51.660342 kubelet[2284]: I1213 13:26:51.660309 2284 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:26:51.660803 kubelet[2284]: I1213 13:26:51.660492 2284 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:26:51.666406 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:26:51.680222 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:26:51.693008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 13 13:26:51.694486 kubelet[2284]: I1213 13:26:51.694438    2284 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:26:51.694665 kubelet[2284]: I1213 13:26:51.694635    2284 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 13:26:51.695200 kubelet[2284]: I1213 13:26:51.694740    2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:26:51.696223 kubelet[2284]: E1213 13:26:51.696203    2284 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 13:26:51.741929 kubelet[2284]: I1213 13:26:51.741904    2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:26:51.742271 kubelet[2284]: E1213 13:26:51.742244    2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
Dec 13 13:26:51.754920 kubelet[2284]: I1213 13:26:51.754414    2284 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 13:26:51.755416 kubelet[2284]: I1213 13:26:51.755383    2284 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 13:26:51.756962 kubelet[2284]: I1213 13:26:51.756923    2284 topology_manager.go:215] "Topology Admit Handler" podUID="8776b99abf086dcca6dffe34fa80fd53" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 13:26:51.763297 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
Dec 13 13:26:51.782588 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice.
Dec 13 13:26:51.796733 systemd[1]: Created slice kubepods-burstable-pod8776b99abf086dcca6dffe34fa80fd53.slice - libcontainer container kubepods-burstable-pod8776b99abf086dcca6dffe34fa80fd53.slice.
Dec 13 13:26:51.841653 kubelet[2284]: I1213 13:26:51.841622    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:51.841737 kubelet[2284]: I1213 13:26:51.841659    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 13:26:51.841737 kubelet[2284]: I1213 13:26:51.841681    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8776b99abf086dcca6dffe34fa80fd53-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8776b99abf086dcca6dffe34fa80fd53\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:51.841780 kubelet[2284]: I1213 13:26:51.841739    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:51.841820 kubelet[2284]: I1213 13:26:51.841798    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:51.841859 kubelet[2284]: I1213 13:26:51.841839    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:51.841925 kubelet[2284]: I1213 13:26:51.841883    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:51.841953 kubelet[2284]: I1213 13:26:51.841929    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8776b99abf086dcca6dffe34fa80fd53-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8776b99abf086dcca6dffe34fa80fd53\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:51.841953 kubelet[2284]: I1213 13:26:51.841945    2284 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8776b99abf086dcca6dffe34fa80fd53-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8776b99abf086dcca6dffe34fa80fd53\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:52.043048 kubelet[2284]: E1213 13:26:52.042999    2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="800ms"
Dec 13 13:26:52.079550 kubelet[2284]: E1213 13:26:52.079506    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:52.080241 containerd[1470]: time="2024-12-13T13:26:52.080199037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}"
Dec 13 13:26:52.084948 kubelet[2284]: E1213 13:26:52.084899    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:52.085540 containerd[1470]: time="2024-12-13T13:26:52.085289757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}"
Dec 13 13:26:52.099091 kubelet[2284]: E1213 13:26:52.099061    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:52.099623 containerd[1470]: time="2024-12-13T13:26:52.099407957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8776b99abf086dcca6dffe34fa80fd53,Namespace:kube-system,Attempt:0,}"
Dec 13 13:26:52.144191 kubelet[2284]: I1213 13:26:52.144163    2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:26:52.144652 kubelet[2284]: E1213 13:26:52.144614    2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
Dec 13 13:26:52.377178 kubelet[2284]: W1213 13:26:52.376965    2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:52.377178 kubelet[2284]: E1213 13:26:52.377029    2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:52.512611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598743670.mount: Deactivated successfully.
Dec 13 13:26:52.516300 containerd[1470]: time="2024-12-13T13:26:52.516256037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:26:52.517963 containerd[1470]: time="2024-12-13T13:26:52.517914957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 13:26:52.518757 containerd[1470]: time="2024-12-13T13:26:52.518706717Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:26:52.523825 containerd[1470]: time="2024-12-13T13:26:52.521826037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:26:52.523825 containerd[1470]: time="2024-12-13T13:26:52.522311717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:26:52.523825 containerd[1470]: time="2024-12-13T13:26:52.523092557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:26:52.524481 containerd[1470]: time="2024-12-13T13:26:52.524455397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 444.17248ms"
Dec 13 13:26:52.525808 containerd[1470]: time="2024-12-13T13:26:52.525095357Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:26:52.526572 containerd[1470]: time="2024-12-13T13:26:52.526533117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:26:52.528092 containerd[1470]: time="2024-12-13T13:26:52.528050757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 442.69652ms"
Dec 13 13:26:52.535476 containerd[1470]: time="2024-12-13T13:26:52.535440517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 435.97224ms"
Dec 13 13:26:52.592608 kubelet[2284]: E1213 13:26:52.592505    2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.123:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.123:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf7fa1c9fdbd  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:26:51.433762237 +0000 UTC m=+1.347907961,LastTimestamp:2024-12-13 13:26:51.433762237 +0000 UTC m=+1.347907961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 13:26:52.722068 containerd[1470]: time="2024-12-13T13:26:52.721934797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:26:52.722068 containerd[1470]: time="2024-12-13T13:26:52.722021437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:26:52.722378 containerd[1470]: time="2024-12-13T13:26:52.722055077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:26:52.722378 containerd[1470]: time="2024-12-13T13:26:52.722148957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:26:52.726613 containerd[1470]: time="2024-12-13T13:26:52.725846717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:26:52.726703 containerd[1470]: time="2024-12-13T13:26:52.726624837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:26:52.726703 containerd[1470]: time="2024-12-13T13:26:52.726663077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:26:52.726703 containerd[1470]: time="2024-12-13T13:26:52.726673357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:26:52.726790 containerd[1470]: time="2024-12-13T13:26:52.726731397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:26:52.726924 containerd[1470]: time="2024-12-13T13:26:52.726864197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:26:52.727092 containerd[1470]: time="2024-12-13T13:26:52.726983237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:26:52.727240 containerd[1470]: time="2024-12-13T13:26:52.727207557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:26:52.743866 kubelet[2284]: W1213 13:26:52.743764    2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:52.743866 kubelet[2284]: E1213 13:26:52.743823    2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:52.750037 systemd[1]: Started cri-containerd-1cbf59b24f6a272dc79c1903b97a84405efc0a6e0574624eaac232bc43e3c2aa.scope - libcontainer container 1cbf59b24f6a272dc79c1903b97a84405efc0a6e0574624eaac232bc43e3c2aa.
Dec 13 13:26:52.751042 systemd[1]: Started cri-containerd-8438df6518478cef84e70f0119eb81255b77547651cd835aac7a6a4cfea67016.scope - libcontainer container 8438df6518478cef84e70f0119eb81255b77547651cd835aac7a6a4cfea67016.
Dec 13 13:26:52.751957 systemd[1]: Started cri-containerd-f109e79f4810547d978981d90fe5cdec60b2cc87f0b83405b60c22729de51af8.scope - libcontainer container f109e79f4810547d978981d90fe5cdec60b2cc87f0b83405b60c22729de51af8.
Dec 13 13:26:52.784171 containerd[1470]: time="2024-12-13T13:26:52.784083797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cbf59b24f6a272dc79c1903b97a84405efc0a6e0574624eaac232bc43e3c2aa\""
Dec 13 13:26:52.786607 kubelet[2284]: E1213 13:26:52.786574    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:52.789639 containerd[1470]: time="2024-12-13T13:26:52.789606957Z" level=info msg="CreateContainer within sandbox \"1cbf59b24f6a272dc79c1903b97a84405efc0a6e0574624eaac232bc43e3c2aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 13:26:52.791032 containerd[1470]: time="2024-12-13T13:26:52.790963277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8438df6518478cef84e70f0119eb81255b77547651cd835aac7a6a4cfea67016\""
Dec 13 13:26:52.791601 containerd[1470]: time="2024-12-13T13:26:52.791020557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8776b99abf086dcca6dffe34fa80fd53,Namespace:kube-system,Attempt:0,} returns sandbox id \"f109e79f4810547d978981d90fe5cdec60b2cc87f0b83405b60c22729de51af8\""
Dec 13 13:26:52.791824 kubelet[2284]: E1213 13:26:52.791802    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:52.792515 kubelet[2284]: E1213 13:26:52.792228    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:52.794743 containerd[1470]: time="2024-12-13T13:26:52.794709997Z" level=info msg="CreateContainer within sandbox \"8438df6518478cef84e70f0119eb81255b77547651cd835aac7a6a4cfea67016\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 13:26:52.795515 containerd[1470]: time="2024-12-13T13:26:52.795480037Z" level=info msg="CreateContainer within sandbox \"f109e79f4810547d978981d90fe5cdec60b2cc87f0b83405b60c22729de51af8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 13:26:52.803393 containerd[1470]: time="2024-12-13T13:26:52.803298357Z" level=info msg="CreateContainer within sandbox \"1cbf59b24f6a272dc79c1903b97a84405efc0a6e0574624eaac232bc43e3c2aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9cef548a4516469daf255545994b5c8c9fde252868833b4a18d3ff0231fc3d9f\""
Dec 13 13:26:52.803997 containerd[1470]: time="2024-12-13T13:26:52.803967557Z" level=info msg="StartContainer for \"9cef548a4516469daf255545994b5c8c9fde252868833b4a18d3ff0231fc3d9f\""
Dec 13 13:26:52.807183 containerd[1470]: time="2024-12-13T13:26:52.807152557Z" level=info msg="CreateContainer within sandbox \"8438df6518478cef84e70f0119eb81255b77547651cd835aac7a6a4cfea67016\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e9dbe79a548b2ce64c4854f40aa4a0ed81b7bdc10f57b003d27f5822207b214\""
Dec 13 13:26:52.807921 containerd[1470]: time="2024-12-13T13:26:52.807802597Z" level=info msg="StartContainer for \"1e9dbe79a548b2ce64c4854f40aa4a0ed81b7bdc10f57b003d27f5822207b214\""
Dec 13 13:26:52.813904 containerd[1470]: time="2024-12-13T13:26:52.813850837Z" level=info msg="CreateContainer within sandbox \"f109e79f4810547d978981d90fe5cdec60b2cc87f0b83405b60c22729de51af8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18f5397562415a48ae6104a1bf9cc3e004be3990b4514fbff14a92b1257dfe07\""
Dec 13 13:26:52.814250 containerd[1470]: time="2024-12-13T13:26:52.814220317Z" level=info msg="StartContainer for \"18f5397562415a48ae6104a1bf9cc3e004be3990b4514fbff14a92b1257dfe07\""
Dec 13 13:26:52.829016 systemd[1]: Started cri-containerd-9cef548a4516469daf255545994b5c8c9fde252868833b4a18d3ff0231fc3d9f.scope - libcontainer container 9cef548a4516469daf255545994b5c8c9fde252868833b4a18d3ff0231fc3d9f.
Dec 13 13:26:52.832157 systemd[1]: Started cri-containerd-1e9dbe79a548b2ce64c4854f40aa4a0ed81b7bdc10f57b003d27f5822207b214.scope - libcontainer container 1e9dbe79a548b2ce64c4854f40aa4a0ed81b7bdc10f57b003d27f5822207b214.
Dec 13 13:26:52.843011 systemd[1]: Started cri-containerd-18f5397562415a48ae6104a1bf9cc3e004be3990b4514fbff14a92b1257dfe07.scope - libcontainer container 18f5397562415a48ae6104a1bf9cc3e004be3990b4514fbff14a92b1257dfe07.
Dec 13 13:26:52.845134 kubelet[2284]: E1213 13:26:52.845099    2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="1.6s"
Dec 13 13:26:52.893210 containerd[1470]: time="2024-12-13T13:26:52.893160237Z" level=info msg="StartContainer for \"9cef548a4516469daf255545994b5c8c9fde252868833b4a18d3ff0231fc3d9f\" returns successfully"
Dec 13 13:26:52.893319 containerd[1470]: time="2024-12-13T13:26:52.893298077Z" level=info msg="StartContainer for \"1e9dbe79a548b2ce64c4854f40aa4a0ed81b7bdc10f57b003d27f5822207b214\" returns successfully"
Dec 13 13:26:52.893348 containerd[1470]: time="2024-12-13T13:26:52.893323277Z" level=info msg="StartContainer for \"18f5397562415a48ae6104a1bf9cc3e004be3990b4514fbff14a92b1257dfe07\" returns successfully"
Dec 13 13:26:52.894177 kubelet[2284]: W1213 13:26:52.894113    2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:52.894177 kubelet[2284]: E1213 13:26:52.894162    2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:52.946470 kubelet[2284]: I1213 13:26:52.946444    2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:26:52.947208 kubelet[2284]: E1213 13:26:52.947176    2284 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
Dec 13 13:26:53.023927 kubelet[2284]: W1213 13:26:53.023761    2284 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:53.023927 kubelet[2284]: E1213 13:26:53.023828    2284 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
Dec 13 13:26:53.464748 kubelet[2284]: E1213 13:26:53.464719    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:53.469915 kubelet[2284]: E1213 13:26:53.466083    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:53.475936 kubelet[2284]: E1213 13:26:53.475914    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:54.469948 kubelet[2284]: E1213 13:26:54.469894    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:54.552481 kubelet[2284]: I1213 13:26:54.551917    2284 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:26:54.635821 kubelet[2284]: E1213 13:26:54.635781    2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 13:26:54.728466 kubelet[2284]: I1213 13:26:54.728043    2284 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 13:26:54.736839 kubelet[2284]: E1213 13:26:54.736815    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:54.837724 kubelet[2284]: E1213 13:26:54.837696    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:54.938509 kubelet[2284]: E1213 13:26:54.938457    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:55.039214 kubelet[2284]: E1213 13:26:55.038892    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:55.139542 kubelet[2284]: E1213 13:26:55.139506    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:55.240444 kubelet[2284]: E1213 13:26:55.240410    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:55.341053 kubelet[2284]: E1213 13:26:55.340971    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:55.442049 kubelet[2284]: E1213 13:26:55.442016    2284 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:26:55.469797 kubelet[2284]: E1213 13:26:55.469690    2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:56.434625 kubelet[2284]: I1213 13:26:56.434576    2284 apiserver.go:52] "Watching apiserver"
Dec 13 13:26:56.439075 kubelet[2284]: I1213 13:26:56.439025    2284 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 13:26:56.542130 systemd[1]: Reloading requested from client PID 2563 ('systemctl') (unit session-7.scope)...
Dec 13 13:26:56.542144 systemd[1]: Reloading...
Dec 13 13:26:56.610907 zram_generator::config[2604]: No configuration found.
Dec 13 13:26:56.689154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:26:56.754697 systemd[1]: Reloading finished in 212 ms.
Dec 13 13:26:56.786323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:56.786711 kubelet[2284]: I1213 13:26:56.786431    2284 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:26:56.801673 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:26:56.802055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:56.802158 systemd[1]: kubelet.service: Consumed 1.671s CPU time, 116.6M memory peak, 0B memory swap peak.
Dec 13 13:26:56.809251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:26:56.897069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:26:56.900500 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:26:56.938221 kubelet[2644]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:26:56.938221 kubelet[2644]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:26:56.938221 kubelet[2644]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:26:56.938535 kubelet[2644]: I1213 13:26:56.938257 2644 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:26:56.945249 kubelet[2644]: I1213 13:26:56.944742 2644 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 13:26:56.945249 kubelet[2644]: I1213 13:26:56.944832 2644 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:26:56.945952 kubelet[2644]: I1213 13:26:56.945930 2644 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 13:26:56.947210 kubelet[2644]: I1213 13:26:56.947194 2644 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 13:26:56.950444 kubelet[2644]: I1213 13:26:56.950420 2644 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:26:56.954634 kubelet[2644]: I1213 13:26:56.954584 2644 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:26:56.954776 kubelet[2644]: I1213 13:26:56.954730 2644 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:26:56.954908 kubelet[2644]: I1213 13:26:56.954749 2644 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:26:56.954985 kubelet[2644]: I1213 13:26:56.954914 2644 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:26:56.954985 kubelet[2644]: I1213 13:26:56.954923 2644 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:26:56.954985 kubelet[2644]: I1213 13:26:56.954953 2644 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:26:56.955052 kubelet[2644]: I1213 13:26:56.955040 2644 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 13:26:56.955052 kubelet[2644]: I1213 13:26:56.955051 2644 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:26:56.955089 kubelet[2644]: I1213 13:26:56.955070 2644 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:26:56.955089 kubelet[2644]: I1213 13:26:56.955085 2644 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:26:56.955488 kubelet[2644]: I1213 13:26:56.955453 2644 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:26:56.958937 kubelet[2644]: I1213 13:26:56.958909 2644 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:26:56.962029 kubelet[2644]: I1213 13:26:56.961965 2644 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:26:56.962237 kubelet[2644]: I1213 13:26:56.962221 2644 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:26:56.964060 kubelet[2644]: I1213 13:26:56.964035 2644 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:26:56.965123 kubelet[2644]: I1213 13:26:56.965003 2644 server.go:1264] "Started kubelet"
Dec 13 13:26:56.968380 kubelet[2644]: I1213 13:26:56.965837 2644 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:26:56.968380 kubelet[2644]: I1213 13:26:56.966977 2644 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 13:26:56.968380 kubelet[2644]: I1213 13:26:56.967939 2644 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:26:56.968380 kubelet[2644]: I1213 13:26:56.968314 2644 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 13:26:56.969501 kubelet[2644]: I1213 13:26:56.969487 2644 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:26:56.978036 kubelet[2644]: I1213 13:26:56.978007 2644 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:26:56.978036 kubelet[2644]: I1213 13:26:56.978029 2644 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:26:56.978110 kubelet[2644]: I1213 13:26:56.978087 2644 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:26:56.978589 kubelet[2644]: E1213 13:26:56.978562 2644 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:26:56.981205 kubelet[2644]: I1213 13:26:56.981181 2644 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:26:56.983577 kubelet[2644]: I1213 13:26:56.983559 2644 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:26:56.983688 kubelet[2644]: I1213 13:26:56.983675 2644 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:26:56.985697 kubelet[2644]: I1213 13:26:56.985677 2644 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 13:26:56.985756 kubelet[2644]: E1213 13:26:56.985737 2644 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:26:57.014803 kubelet[2644]: I1213 13:26:57.014777 2644 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:26:57.014803 kubelet[2644]: I1213 13:26:57.014796 2644 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:26:57.014918 kubelet[2644]: I1213 13:26:57.014814 2644 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:26:57.014987 kubelet[2644]: I1213 13:26:57.014969 2644 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 13:26:57.015021 kubelet[2644]: I1213 13:26:57.014986 2644 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 13:26:57.015021 kubelet[2644]: I1213 13:26:57.015004 2644 policy_none.go:49] "None policy: Start"
Dec 13 13:26:57.015564 kubelet[2644]: I1213 13:26:57.015539 2644 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:26:57.015564 kubelet[2644]: I1213 13:26:57.015562 2644 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:26:57.015722 kubelet[2644]: I1213 13:26:57.015707 2644 state_mem.go:75] "Updated machine memory state"
Dec 13 13:26:57.019161 kubelet[2644]: I1213 13:26:57.019141 2644 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:26:57.019445 kubelet[2644]: I1213 13:26:57.019282 2644 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 13:26:57.019445 kubelet[2644]: I1213 13:26:57.019380 2644 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:26:57.071768 kubelet[2644]: I1213 13:26:57.071736 2644 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:26:57.079630 kubelet[2644]: I1213 13:26:57.079594 2644 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 13:26:57.079686 kubelet[2644]: I1213 13:26:57.079667 2644 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 13:26:57.086804 kubelet[2644]: I1213 13:26:57.086712 2644 topology_manager.go:215] "Topology Admit Handler" podUID="8776b99abf086dcca6dffe34fa80fd53" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 13:26:57.086866 kubelet[2644]: I1213 13:26:57.086830 2644 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 13:26:57.086904 kubelet[2644]: I1213 13:26:57.086881 2644 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 13:26:57.170595 kubelet[2644]: I1213 13:26:57.170562 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:57.170595 kubelet[2644]: I1213 13:26:57.170595 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:57.170705 kubelet[2644]: I1213 13:26:57.170614 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:57.170705 kubelet[2644]: I1213 13:26:57.170640 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8776b99abf086dcca6dffe34fa80fd53-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8776b99abf086dcca6dffe34fa80fd53\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:57.170705 kubelet[2644]: I1213 13:26:57.170656 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8776b99abf086dcca6dffe34fa80fd53-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8776b99abf086dcca6dffe34fa80fd53\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:57.170705 kubelet[2644]: I1213 13:26:57.170671 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8776b99abf086dcca6dffe34fa80fd53-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8776b99abf086dcca6dffe34fa80fd53\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:57.170705 kubelet[2644]: I1213 13:26:57.170686 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 13:26:57.170806 kubelet[2644]: I1213 13:26:57.170700 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:57.170806 kubelet[2644]: I1213 13:26:57.170724 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:26:57.396475 kubelet[2644]: E1213 13:26:57.396160 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:57.396475 kubelet[2644]: E1213 13:26:57.396310 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:57.396598 kubelet[2644]: E1213 13:26:57.396511 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:57.956116 kubelet[2644]: I1213 13:26:57.956081 2644 apiserver.go:52] "Watching apiserver"
Dec 13 13:26:57.968614 kubelet[2644]: I1213 13:26:57.968560 2644 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 13:26:57.997556 kubelet[2644]: E1213 13:26:57.997504 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:57.998529 kubelet[2644]: E1213 13:26:57.998456 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:58.019423 kubelet[2644]: E1213 13:26:58.019067 2644 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 13:26:58.019771 kubelet[2644]: E1213 13:26:58.019753 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:58.028611 kubelet[2644]: I1213 13:26:58.028559 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.028546686 podStartE2EDuration="1.028546686s" podCreationTimestamp="2024-12-13 13:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:26:58.027937123 +0000 UTC m=+1.123025967" watchObservedRunningTime="2024-12-13 13:26:58.028546686 +0000 UTC m=+1.123635530"
Dec 13 13:26:58.028707 kubelet[2644]: I1213 13:26:58.028661 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.028656566 podStartE2EDuration="1.028656566s" podCreationTimestamp="2024-12-13 13:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:26:58.018444678 +0000 UTC m=+1.113533522" watchObservedRunningTime="2024-12-13 13:26:58.028656566 +0000 UTC m=+1.123745410"
Dec 13 13:26:58.045928 kubelet[2644]: I1213 13:26:58.045802 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.045786806 podStartE2EDuration="1.045786806s" podCreationTimestamp="2024-12-13 13:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:26:58.036765484 +0000 UTC m=+1.131854288" watchObservedRunningTime="2024-12-13 13:26:58.045786806 +0000 UTC m=+1.140875650"
Dec 13 13:26:59.004284 kubelet[2644]: E1213 13:26:59.004254 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:59.004712 kubelet[2644]: E1213 13:26:59.004563 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:26:59.999957 kubelet[2644]: E1213 13:26:59.999862 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:01.684859 sudo[1652]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:01.686251 sshd[1651]: Connection closed by 10.0.0.1 port 54866
Dec 13 13:27:01.686802 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:01.690281 systemd[1]: sshd@6-10.0.0.123:22-10.0.0.1:54866.service: Deactivated successfully.
Dec 13 13:27:01.692054 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:27:01.692200 systemd[1]: session-7.scope: Consumed 6.921s CPU time, 191.4M memory peak, 0B memory swap peak.
Dec 13 13:27:01.692636 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:27:01.693625 systemd-logind[1454]: Removed session 7.
Dec 13 13:27:03.074503 kubelet[2644]: E1213 13:27:03.074466 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:04.006816 kubelet[2644]: E1213 13:27:04.006777 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:08.709056 kubelet[2644]: E1213 13:27:08.709015 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:08.743878 kubelet[2644]: E1213 13:27:08.743848 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:10.736005 update_engine[1455]: I20241213 13:27:10.735931 1455 update_attempter.cc:509] Updating boot flags...
Dec 13 13:27:10.761898 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2746)
Dec 13 13:27:10.804903 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2750)
Dec 13 13:27:10.829911 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2750)
Dec 13 13:27:11.421464 kubelet[2644]: I1213 13:27:11.421431 2644 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 13:27:11.426408 containerd[1470]: time="2024-12-13T13:27:11.426291592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 13:27:11.427303 kubelet[2644]: I1213 13:27:11.426553 2644 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 13:27:12.389228 kubelet[2644]: I1213 13:27:12.389151 2644 topology_manager.go:215] "Topology Admit Handler" podUID="aa72ef9a-464d-423b-902e-3b48945ff582" podNamespace="kube-system" podName="kube-proxy-g9gwp"
Dec 13 13:27:12.398090 systemd[1]: Created slice kubepods-besteffort-podaa72ef9a_464d_423b_902e_3b48945ff582.slice - libcontainer container kubepods-besteffort-podaa72ef9a_464d_423b_902e_3b48945ff582.slice.
Dec 13 13:27:12.488473 kubelet[2644]: I1213 13:27:12.488410 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa72ef9a-464d-423b-902e-3b48945ff582-lib-modules\") pod \"kube-proxy-g9gwp\" (UID: \"aa72ef9a-464d-423b-902e-3b48945ff582\") " pod="kube-system/kube-proxy-g9gwp"
Dec 13 13:27:12.488473 kubelet[2644]: I1213 13:27:12.488472 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa72ef9a-464d-423b-902e-3b48945ff582-kube-proxy\") pod \"kube-proxy-g9gwp\" (UID: \"aa72ef9a-464d-423b-902e-3b48945ff582\") " pod="kube-system/kube-proxy-g9gwp"
Dec 13 13:27:12.488816 kubelet[2644]: I1213 13:27:12.488495 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa72ef9a-464d-423b-902e-3b48945ff582-xtables-lock\") pod \"kube-proxy-g9gwp\" (UID: \"aa72ef9a-464d-423b-902e-3b48945ff582\") " pod="kube-system/kube-proxy-g9gwp"
Dec 13 13:27:12.488816 kubelet[2644]: I1213 13:27:12.488511 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vnf\" (UniqueName: \"kubernetes.io/projected/aa72ef9a-464d-423b-902e-3b48945ff582-kube-api-access-z9vnf\") pod \"kube-proxy-g9gwp\" (UID: \"aa72ef9a-464d-423b-902e-3b48945ff582\") " pod="kube-system/kube-proxy-g9gwp"
Dec 13 13:27:12.542091 kubelet[2644]: I1213 13:27:12.542052 2644 topology_manager.go:215] "Topology Admit Handler" podUID="c33cd43d-6cfd-4215-93c6-409dc6ed842e" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-pbbfk"
Dec 13 13:27:12.549772 systemd[1]: Created slice kubepods-besteffort-podc33cd43d_6cfd_4215_93c6_409dc6ed842e.slice - libcontainer container kubepods-besteffort-podc33cd43d_6cfd_4215_93c6_409dc6ed842e.slice.
Dec 13 13:27:12.589358 kubelet[2644]: I1213 13:27:12.589320 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c33cd43d-6cfd-4215-93c6-409dc6ed842e-var-lib-calico\") pod \"tigera-operator-7bc55997bb-pbbfk\" (UID: \"c33cd43d-6cfd-4215-93c6-409dc6ed842e\") " pod="tigera-operator/tigera-operator-7bc55997bb-pbbfk"
Dec 13 13:27:12.589358 kubelet[2644]: I1213 13:27:12.589357 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4wbk\" (UniqueName: \"kubernetes.io/projected/c33cd43d-6cfd-4215-93c6-409dc6ed842e-kube-api-access-x4wbk\") pod \"tigera-operator-7bc55997bb-pbbfk\" (UID: \"c33cd43d-6cfd-4215-93c6-409dc6ed842e\") " pod="tigera-operator/tigera-operator-7bc55997bb-pbbfk"
Dec 13 13:27:12.710839 kubelet[2644]: E1213 13:27:12.710796 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:12.711799 containerd[1470]: time="2024-12-13T13:27:12.711421173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g9gwp,Uid:aa72ef9a-464d-423b-902e-3b48945ff582,Namespace:kube-system,Attempt:0,}"
Dec 13 13:27:12.729600 containerd[1470]: time="2024-12-13T13:27:12.729387607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:27:12.729600 containerd[1470]: time="2024-12-13T13:27:12.729441927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:27:12.729600 containerd[1470]: time="2024-12-13T13:27:12.729453127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:12.729600 containerd[1470]: time="2024-12-13T13:27:12.729525767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:12.751056 systemd[1]: Started cri-containerd-a62ae787ea318a72b0ecfeb8760484fe749349f99f8928d3eab9e64e07cc162f.scope - libcontainer container a62ae787ea318a72b0ecfeb8760484fe749349f99f8928d3eab9e64e07cc162f.
Dec 13 13:27:12.769730 containerd[1470]: time="2024-12-13T13:27:12.769691043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g9gwp,Uid:aa72ef9a-464d-423b-902e-3b48945ff582,Namespace:kube-system,Attempt:0,} returns sandbox id \"a62ae787ea318a72b0ecfeb8760484fe749349f99f8928d3eab9e64e07cc162f\""
Dec 13 13:27:12.772180 kubelet[2644]: E1213 13:27:12.772152 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:12.780524 containerd[1470]: time="2024-12-13T13:27:12.780482383Z" level=info msg="CreateContainer within sandbox \"a62ae787ea318a72b0ecfeb8760484fe749349f99f8928d3eab9e64e07cc162f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:27:12.803540 containerd[1470]: time="2024-12-13T13:27:12.803489107Z" level=info msg="CreateContainer within sandbox \"a62ae787ea318a72b0ecfeb8760484fe749349f99f8928d3eab9e64e07cc162f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1124da45bccf9d26bf8cffe80c1685edbdf7a5b4b2efbe880dd376df81404149\""
Dec 13 13:27:12.804894 containerd[1470]: time="2024-12-13T13:27:12.804308909Z" level=info msg="StartContainer for \"1124da45bccf9d26bf8cffe80c1685edbdf7a5b4b2efbe880dd376df81404149\""
Dec 13 13:27:12.830047 systemd[1]: Started cri-containerd-1124da45bccf9d26bf8cffe80c1685edbdf7a5b4b2efbe880dd376df81404149.scope - libcontainer container 1124da45bccf9d26bf8cffe80c1685edbdf7a5b4b2efbe880dd376df81404149.
Dec 13 13:27:12.854176 containerd[1470]: time="2024-12-13T13:27:12.854125403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-pbbfk,Uid:c33cd43d-6cfd-4215-93c6-409dc6ed842e,Namespace:tigera-operator,Attempt:0,}"
Dec 13 13:27:12.860625 containerd[1470]: time="2024-12-13T13:27:12.860596775Z" level=info msg="StartContainer for \"1124da45bccf9d26bf8cffe80c1685edbdf7a5b4b2efbe880dd376df81404149\" returns successfully"
Dec 13 13:27:12.882775 containerd[1470]: time="2024-12-13T13:27:12.882626536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:27:12.882775 containerd[1470]: time="2024-12-13T13:27:12.882687017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:27:12.882775 containerd[1470]: time="2024-12-13T13:27:12.882708457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:12.882994 containerd[1470]: time="2024-12-13T13:27:12.882865737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:27:12.904079 systemd[1]: Started cri-containerd-4ea740fe960c6f59a7cf54530426a9a9e8b53d4cfc173eaffb0bf5bb98309c73.scope - libcontainer container 4ea740fe960c6f59a7cf54530426a9a9e8b53d4cfc173eaffb0bf5bb98309c73.
Dec 13 13:27:12.936805 containerd[1470]: time="2024-12-13T13:27:12.936766999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-pbbfk,Uid:c33cd43d-6cfd-4215-93c6-409dc6ed842e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ea740fe960c6f59a7cf54530426a9a9e8b53d4cfc173eaffb0bf5bb98309c73\""
Dec 13 13:27:12.947161 containerd[1470]: time="2024-12-13T13:27:12.947105818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 13:27:13.021513 kubelet[2644]: E1213 13:27:13.021159 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:27:13.034540 kubelet[2644]: I1213 13:27:13.034238 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g9gwp" podStartSLOduration=1.034221859 podStartE2EDuration="1.034221859s" podCreationTimestamp="2024-12-13 13:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:27:13.033624938 +0000 UTC m=+16.128713782" watchObservedRunningTime="2024-12-13 13:27:13.034221859 +0000 UTC m=+16.129310703"
Dec 13 13:27:14.166672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873998068.mount: Deactivated successfully.
Dec 13 13:27:14.590944 containerd[1470]: time="2024-12-13T13:27:14.590762231Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125992" Dec 13 13:27:14.594911 containerd[1470]: time="2024-12-13T13:27:14.594881998Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.64773018s" Dec 13 13:27:14.594911 containerd[1470]: time="2024-12-13T13:27:14.594912878Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 13:27:14.598482 containerd[1470]: time="2024-12-13T13:27:14.598346164Z" level=info msg="CreateContainer within sandbox \"4ea740fe960c6f59a7cf54530426a9a9e8b53d4cfc173eaffb0bf5bb98309c73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 13:27:14.613176 containerd[1470]: time="2024-12-13T13:27:14.613051748Z" level=info msg="CreateContainer within sandbox \"4ea740fe960c6f59a7cf54530426a9a9e8b53d4cfc173eaffb0bf5bb98309c73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0166d860af7e9242886a509256bedf89c35b53e79dadb2d1762de9a7e92e0b54\"" Dec 13 13:27:14.613520 containerd[1470]: time="2024-12-13T13:27:14.613495389Z" level=info msg="StartContainer for \"0166d860af7e9242886a509256bedf89c35b53e79dadb2d1762de9a7e92e0b54\"" Dec 13 13:27:14.616112 containerd[1470]: time="2024-12-13T13:27:14.615181112Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:14.616112 containerd[1470]: time="2024-12-13T13:27:14.615904913Z" level=info msg="ImageCreate event 
name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:14.616531 containerd[1470]: time="2024-12-13T13:27:14.616501674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:14.642028 systemd[1]: Started cri-containerd-0166d860af7e9242886a509256bedf89c35b53e79dadb2d1762de9a7e92e0b54.scope - libcontainer container 0166d860af7e9242886a509256bedf89c35b53e79dadb2d1762de9a7e92e0b54. Dec 13 13:27:14.669752 containerd[1470]: time="2024-12-13T13:27:14.667512919Z" level=info msg="StartContainer for \"0166d860af7e9242886a509256bedf89c35b53e79dadb2d1762de9a7e92e0b54\" returns successfully" Dec 13 13:27:15.038451 kubelet[2644]: I1213 13:27:15.038374 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-pbbfk" podStartSLOduration=1.387746586 podStartE2EDuration="3.038357931s" podCreationTimestamp="2024-12-13 13:27:12 +0000 UTC" firstStartedPulling="2024-12-13 13:27:12.946567817 +0000 UTC m=+16.041656661" lastFinishedPulling="2024-12-13 13:27:14.597179162 +0000 UTC m=+17.692268006" observedRunningTime="2024-12-13 13:27:15.03834981 +0000 UTC m=+18.133438654" watchObservedRunningTime="2024-12-13 13:27:15.038357931 +0000 UTC m=+18.133446735" Dec 13 13:27:18.105884 kubelet[2644]: I1213 13:27:18.105821 2644 topology_manager.go:215] "Topology Admit Handler" podUID="abfff615-a3d8-4491-92b1-5e5d2bf4c779" podNamespace="calico-system" podName="calico-typha-867dd587b-dljjl" Dec 13 13:27:18.123832 systemd[1]: Created slice kubepods-besteffort-podabfff615_a3d8_4491_92b1_5e5d2bf4c779.slice - libcontainer container kubepods-besteffort-podabfff615_a3d8_4491_92b1_5e5d2bf4c779.slice. 
Dec 13 13:27:18.172241 kubelet[2644]: I1213 13:27:18.172189 2644 topology_manager.go:215] "Topology Admit Handler" podUID="3c6840b6-a59f-44c6-8adb-75163cd2ce0b" podNamespace="calico-system" podName="calico-node-b5ckg" Dec 13 13:27:18.174088 kubelet[2644]: W1213 13:27:18.174023 2644 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Dec 13 13:27:18.174862 kubelet[2644]: E1213 13:27:18.174817 2644 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Dec 13 13:27:18.180352 systemd[1]: Created slice kubepods-besteffort-pod3c6840b6_a59f_44c6_8adb_75163cd2ce0b.slice - libcontainer container kubepods-besteffort-pod3c6840b6_a59f_44c6_8adb_75163cd2ce0b.slice. 
Dec 13 13:27:18.227477 kubelet[2644]: I1213 13:27:18.227430 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-var-run-calico\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227577 kubelet[2644]: I1213 13:27:18.227492 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abfff615-a3d8-4491-92b1-5e5d2bf4c779-tigera-ca-bundle\") pod \"calico-typha-867dd587b-dljjl\" (UID: \"abfff615-a3d8-4491-92b1-5e5d2bf4c779\") " pod="calico-system/calico-typha-867dd587b-dljjl" Dec 13 13:27:18.227577 kubelet[2644]: I1213 13:27:18.227519 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-lib-modules\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227577 kubelet[2644]: I1213 13:27:18.227537 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-tigera-ca-bundle\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227577 kubelet[2644]: I1213 13:27:18.227553 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-flexvol-driver-host\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227577 
kubelet[2644]: I1213 13:27:18.227571 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvr9l\" (UniqueName: \"kubernetes.io/projected/abfff615-a3d8-4491-92b1-5e5d2bf4c779-kube-api-access-zvr9l\") pod \"calico-typha-867dd587b-dljjl\" (UID: \"abfff615-a3d8-4491-92b1-5e5d2bf4c779\") " pod="calico-system/calico-typha-867dd587b-dljjl" Dec 13 13:27:18.227688 kubelet[2644]: I1213 13:27:18.227586 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-node-certs\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227688 kubelet[2644]: I1213 13:27:18.227603 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-var-lib-calico\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227688 kubelet[2644]: I1213 13:27:18.227618 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ncvv\" (UniqueName: \"kubernetes.io/projected/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-kube-api-access-9ncvv\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227688 kubelet[2644]: I1213 13:27:18.227661 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-cni-net-dir\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227767 kubelet[2644]: I1213 
13:27:18.227701 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-cni-log-dir\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227767 kubelet[2644]: I1213 13:27:18.227721 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-policysync\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227767 kubelet[2644]: I1213 13:27:18.227737 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-cni-bin-dir\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227767 kubelet[2644]: I1213 13:27:18.227754 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c6840b6-a59f-44c6-8adb-75163cd2ce0b-xtables-lock\") pod \"calico-node-b5ckg\" (UID: \"3c6840b6-a59f-44c6-8adb-75163cd2ce0b\") " pod="calico-system/calico-node-b5ckg" Dec 13 13:27:18.227925 kubelet[2644]: I1213 13:27:18.227770 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/abfff615-a3d8-4491-92b1-5e5d2bf4c779-typha-certs\") pod \"calico-typha-867dd587b-dljjl\" (UID: \"abfff615-a3d8-4491-92b1-5e5d2bf4c779\") " pod="calico-system/calico-typha-867dd587b-dljjl" Dec 13 13:27:18.301428 kubelet[2644]: I1213 13:27:18.301364 2644 topology_manager.go:215] "Topology Admit Handler" 
podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" podNamespace="calico-system" podName="csi-node-driver-jl58q" Dec 13 13:27:18.301653 kubelet[2644]: E1213 13:27:18.301629 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:18.329981 kubelet[2644]: I1213 13:27:18.328151 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t977x\" (UniqueName: \"kubernetes.io/projected/991aee5f-8651-49bc-ac7b-b0b4b2cc81c5-kube-api-access-t977x\") pod \"csi-node-driver-jl58q\" (UID: \"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5\") " pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:18.329981 kubelet[2644]: I1213 13:27:18.328253 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/991aee5f-8651-49bc-ac7b-b0b4b2cc81c5-varrun\") pod \"csi-node-driver-jl58q\" (UID: \"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5\") " pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:18.329981 kubelet[2644]: I1213 13:27:18.328284 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/991aee5f-8651-49bc-ac7b-b0b4b2cc81c5-kubelet-dir\") pod \"csi-node-driver-jl58q\" (UID: \"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5\") " pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:18.329981 kubelet[2644]: I1213 13:27:18.328346 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/991aee5f-8651-49bc-ac7b-b0b4b2cc81c5-registration-dir\") pod \"csi-node-driver-jl58q\" (UID: 
\"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5\") " pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:18.329981 kubelet[2644]: I1213 13:27:18.328398 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/991aee5f-8651-49bc-ac7b-b0b4b2cc81c5-socket-dir\") pod \"csi-node-driver-jl58q\" (UID: \"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5\") " pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:18.338224 kubelet[2644]: E1213 13:27:18.338134 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.338224 kubelet[2644]: W1213 13:27:18.338157 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.338224 kubelet[2644]: E1213 13:27:18.338181 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.353780 kubelet[2644]: E1213 13:27:18.353472 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.353780 kubelet[2644]: W1213 13:27:18.353493 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.353780 kubelet[2644]: E1213 13:27:18.353513 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.356041 kubelet[2644]: E1213 13:27:18.355968 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.356041 kubelet[2644]: W1213 13:27:18.355988 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.356041 kubelet[2644]: E1213 13:27:18.356005 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.428985 kubelet[2644]: E1213 13:27:18.428812 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:18.430700 kubelet[2644]: E1213 13:27:18.429362 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.430700 kubelet[2644]: W1213 13:27:18.429392 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.430700 kubelet[2644]: E1213 13:27:18.429412 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.430700 kubelet[2644]: E1213 13:27:18.430493 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.430700 kubelet[2644]: W1213 13:27:18.430517 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.430700 kubelet[2644]: E1213 13:27:18.430532 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.430918 containerd[1470]: time="2024-12-13T13:27:18.429794687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867dd587b-dljjl,Uid:abfff615-a3d8-4491-92b1-5e5d2bf4c779,Namespace:calico-system,Attempt:0,}" Dec 13 13:27:18.431157 kubelet[2644]: E1213 13:27:18.430801 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.431157 kubelet[2644]: W1213 13:27:18.430820 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.431157 kubelet[2644]: E1213 13:27:18.430830 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.431157 kubelet[2644]: E1213 13:27:18.431022 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.431157 kubelet[2644]: W1213 13:27:18.431032 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.431157 kubelet[2644]: E1213 13:27:18.431041 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.431276 kubelet[2644]: E1213 13:27:18.431212 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.431276 kubelet[2644]: W1213 13:27:18.431222 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.431276 kubelet[2644]: E1213 13:27:18.431231 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.432318 kubelet[2644]: E1213 13:27:18.431450 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.432318 kubelet[2644]: W1213 13:27:18.431464 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.432318 kubelet[2644]: E1213 13:27:18.431474 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.432318 kubelet[2644]: E1213 13:27:18.431724 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.432318 kubelet[2644]: W1213 13:27:18.431733 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.432318 kubelet[2644]: E1213 13:27:18.431835 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.432318 kubelet[2644]: E1213 13:27:18.432250 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.432318 kubelet[2644]: W1213 13:27:18.432262 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.432318 kubelet[2644]: E1213 13:27:18.432296 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.432681 kubelet[2644]: E1213 13:27:18.432476 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.432681 kubelet[2644]: W1213 13:27:18.432485 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.432681 kubelet[2644]: E1213 13:27:18.432529 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.432779 kubelet[2644]: E1213 13:27:18.432761 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.432812 kubelet[2644]: W1213 13:27:18.432792 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.432930 kubelet[2644]: E1213 13:27:18.432855 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.433004 kubelet[2644]: E1213 13:27:18.432993 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.433004 kubelet[2644]: W1213 13:27:18.433004 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.433109 kubelet[2644]: E1213 13:27:18.433062 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.433167 kubelet[2644]: E1213 13:27:18.433148 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.433167 kubelet[2644]: W1213 13:27:18.433164 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.433268 kubelet[2644]: E1213 13:27:18.433211 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.433302 kubelet[2644]: E1213 13:27:18.433288 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.433332 kubelet[2644]: W1213 13:27:18.433299 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.433332 kubelet[2644]: E1213 13:27:18.433324 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.433569 kubelet[2644]: E1213 13:27:18.433550 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.433569 kubelet[2644]: W1213 13:27:18.433563 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.433569 kubelet[2644]: E1213 13:27:18.433577 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.433946 kubelet[2644]: E1213 13:27:18.433932 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.434001 kubelet[2644]: W1213 13:27:18.433989 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.434088 kubelet[2644]: E1213 13:27:18.434048 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.434427 kubelet[2644]: E1213 13:27:18.434356 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.434427 kubelet[2644]: W1213 13:27:18.434377 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.434511 kubelet[2644]: E1213 13:27:18.434415 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.434772 kubelet[2644]: E1213 13:27:18.434687 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.434772 kubelet[2644]: W1213 13:27:18.434699 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.434772 kubelet[2644]: E1213 13:27:18.434733 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.435033 kubelet[2644]: E1213 13:27:18.434964 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.435033 kubelet[2644]: W1213 13:27:18.434976 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.435033 kubelet[2644]: E1213 13:27:18.435006 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.435358 kubelet[2644]: E1213 13:27:18.435345 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.435539 kubelet[2644]: W1213 13:27:18.435416 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.435539 kubelet[2644]: E1213 13:27:18.435449 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.435670 kubelet[2644]: E1213 13:27:18.435656 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.435725 kubelet[2644]: W1213 13:27:18.435714 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.435841 kubelet[2644]: E1213 13:27:18.435781 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.436045 kubelet[2644]: E1213 13:27:18.436031 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.436190 kubelet[2644]: W1213 13:27:18.436103 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.436190 kubelet[2644]: E1213 13:27:18.436130 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.436425 kubelet[2644]: E1213 13:27:18.436410 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.436509 kubelet[2644]: W1213 13:27:18.436474 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.436509 kubelet[2644]: E1213 13:27:18.436504 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.436807 kubelet[2644]: E1213 13:27:18.436716 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.436807 kubelet[2644]: W1213 13:27:18.436728 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.436807 kubelet[2644]: E1213 13:27:18.436775 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.437038 kubelet[2644]: E1213 13:27:18.436945 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.437038 kubelet[2644]: W1213 13:27:18.436958 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.437038 kubelet[2644]: E1213 13:27:18.436969 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.437406 kubelet[2644]: E1213 13:27:18.437277 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.437406 kubelet[2644]: W1213 13:27:18.437289 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.437406 kubelet[2644]: E1213 13:27:18.437307 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.437549 kubelet[2644]: E1213 13:27:18.437538 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.437604 kubelet[2644]: W1213 13:27:18.437593 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.437670 kubelet[2644]: E1213 13:27:18.437646 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.467775 kubelet[2644]: E1213 13:27:18.467710 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.467775 kubelet[2644]: W1213 13:27:18.467728 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.467775 kubelet[2644]: E1213 13:27:18.467743 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.475808 containerd[1470]: time="2024-12-13T13:27:18.475715946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:18.475808 containerd[1470]: time="2024-12-13T13:27:18.475779306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:18.475808 containerd[1470]: time="2024-12-13T13:27:18.475791066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:18.476016 containerd[1470]: time="2024-12-13T13:27:18.475861987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:18.499066 systemd[1]: Started cri-containerd-ca3e15e360e66185aa89f72a70b8c466c46de9f39ecc144db13463e1c85b631d.scope - libcontainer container ca3e15e360e66185aa89f72a70b8c466c46de9f39ecc144db13463e1c85b631d. Dec 13 13:27:18.527256 containerd[1470]: time="2024-12-13T13:27:18.527201852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867dd587b-dljjl,Uid:abfff615-a3d8-4491-92b1-5e5d2bf4c779,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca3e15e360e66185aa89f72a70b8c466c46de9f39ecc144db13463e1c85b631d\"" Dec 13 13:27:18.527998 kubelet[2644]: E1213 13:27:18.527856 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:18.529270 containerd[1470]: time="2024-12-13T13:27:18.529226215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 13:27:18.532876 kubelet[2644]: E1213 13:27:18.532804 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.532876 kubelet[2644]: W1213 13:27:18.532822 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.532876 kubelet[2644]: E1213 13:27:18.532838 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.634022 kubelet[2644]: E1213 13:27:18.633924 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.634022 kubelet[2644]: W1213 13:27:18.633946 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.634022 kubelet[2644]: E1213 13:27:18.633970 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.734609 kubelet[2644]: E1213 13:27:18.734583 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.734609 kubelet[2644]: W1213 13:27:18.734602 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.734609 kubelet[2644]: E1213 13:27:18.734619 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:18.835184 kubelet[2644]: E1213 13:27:18.835146 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.835184 kubelet[2644]: W1213 13:27:18.835165 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.835184 kubelet[2644]: E1213 13:27:18.835182 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:18.935883 kubelet[2644]: E1213 13:27:18.935730 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:18.935997 kubelet[2644]: W1213 13:27:18.935910 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:18.935997 kubelet[2644]: E1213 13:27:18.935936 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:19.036465 kubelet[2644]: E1213 13:27:19.036438 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:19.036664 kubelet[2644]: W1213 13:27:19.036600 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:19.036664 kubelet[2644]: E1213 13:27:19.036625 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:19.137350 kubelet[2644]: E1213 13:27:19.137316 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:19.137350 kubelet[2644]: W1213 13:27:19.137337 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:19.137350 kubelet[2644]: E1213 13:27:19.137357 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:19.194327 kubelet[2644]: E1213 13:27:19.194243 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:19.194327 kubelet[2644]: W1213 13:27:19.194265 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:19.194327 kubelet[2644]: E1213 13:27:19.194284 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:19.383505 kubelet[2644]: E1213 13:27:19.383475 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:19.384140 containerd[1470]: time="2024-12-13T13:27:19.384107041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b5ckg,Uid:3c6840b6-a59f-44c6-8adb-75163cd2ce0b,Namespace:calico-system,Attempt:0,}" Dec 13 13:27:19.402741 containerd[1470]: time="2024-12-13T13:27:19.402669223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:19.402838 containerd[1470]: time="2024-12-13T13:27:19.402752463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:19.402838 containerd[1470]: time="2024-12-13T13:27:19.402779863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:19.403077 containerd[1470]: time="2024-12-13T13:27:19.403001664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:19.425037 systemd[1]: Started cri-containerd-997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c.scope - libcontainer container 997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c. Dec 13 13:27:19.447198 containerd[1470]: time="2024-12-13T13:27:19.447005957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b5ckg,Uid:3c6840b6-a59f-44c6-8adb-75163cd2ce0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\"" Dec 13 13:27:19.448614 kubelet[2644]: E1213 13:27:19.448555 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:19.986693 kubelet[2644]: E1213 13:27:19.986524 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:20.868403 containerd[1470]: time="2024-12-13T13:27:20.868347521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:20.869359 containerd[1470]: time="2024-12-13T13:27:20.869312682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 13:27:20.870341 containerd[1470]: time="2024-12-13T13:27:20.870305443Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:20.871914 containerd[1470]: time="2024-12-13T13:27:20.871880765Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:20.872802 containerd[1470]: time="2024-12-13T13:27:20.872763726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.343509111s" Dec 13 13:27:20.872802 containerd[1470]: time="2024-12-13T13:27:20.872799526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 13:27:20.873888 containerd[1470]: time="2024-12-13T13:27:20.873782847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:27:20.887495 containerd[1470]: time="2024-12-13T13:27:20.887464822Z" level=info msg="CreateContainer within sandbox \"ca3e15e360e66185aa89f72a70b8c466c46de9f39ecc144db13463e1c85b631d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 13:27:20.898611 containerd[1470]: time="2024-12-13T13:27:20.898574955Z" level=info msg="CreateContainer within sandbox \"ca3e15e360e66185aa89f72a70b8c466c46de9f39ecc144db13463e1c85b631d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8bfbac08ddf0a6742cb22f1966d8799892f8a61644df079291710c6a018894bd\"" Dec 13 13:27:20.899311 containerd[1470]: time="2024-12-13T13:27:20.899273756Z" level=info msg="StartContainer for \"8bfbac08ddf0a6742cb22f1966d8799892f8a61644df079291710c6a018894bd\"" Dec 13 13:27:20.927025 systemd[1]: Started cri-containerd-8bfbac08ddf0a6742cb22f1966d8799892f8a61644df079291710c6a018894bd.scope - libcontainer container 
8bfbac08ddf0a6742cb22f1966d8799892f8a61644df079291710c6a018894bd. Dec 13 13:27:20.961141 containerd[1470]: time="2024-12-13T13:27:20.961101705Z" level=info msg="StartContainer for \"8bfbac08ddf0a6742cb22f1966d8799892f8a61644df079291710c6a018894bd\" returns successfully" Dec 13 13:27:21.045092 kubelet[2644]: E1213 13:27:21.045057 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:21.126814 kubelet[2644]: E1213 13:27:21.126724 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.126814 kubelet[2644]: W1213 13:27:21.126749 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.126814 kubelet[2644]: E1213 13:27:21.126768 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.127055 kubelet[2644]: E1213 13:27:21.127027 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.127055 kubelet[2644]: W1213 13:27:21.127040 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.127110 kubelet[2644]: E1213 13:27:21.127056 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.127525 kubelet[2644]: E1213 13:27:21.127508 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.127525 kubelet[2644]: W1213 13:27:21.127522 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.127581 kubelet[2644]: E1213 13:27:21.127533 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.127960 kubelet[2644]: E1213 13:27:21.127771 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.127960 kubelet[2644]: W1213 13:27:21.127784 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.127960 kubelet[2644]: E1213 13:27:21.127795 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.128164 kubelet[2644]: E1213 13:27:21.127982 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128164 kubelet[2644]: W1213 13:27:21.127991 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128164 kubelet[2644]: E1213 13:27:21.127999 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.128164 kubelet[2644]: E1213 13:27:21.128139 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128164 kubelet[2644]: W1213 13:27:21.128146 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128164 kubelet[2644]: E1213 13:27:21.128153 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.128313 kubelet[2644]: E1213 13:27:21.128296 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128313 kubelet[2644]: W1213 13:27:21.128307 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128313 kubelet[2644]: E1213 13:27:21.128313 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.128471 kubelet[2644]: E1213 13:27:21.128459 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128471 kubelet[2644]: W1213 13:27:21.128469 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128516 kubelet[2644]: E1213 13:27:21.128477 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.128631 kubelet[2644]: E1213 13:27:21.128619 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128655 kubelet[2644]: W1213 13:27:21.128633 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128655 kubelet[2644]: E1213 13:27:21.128641 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.128785 kubelet[2644]: E1213 13:27:21.128773 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128785 kubelet[2644]: W1213 13:27:21.128783 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128827 kubelet[2644]: E1213 13:27:21.128790 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.128931 kubelet[2644]: E1213 13:27:21.128920 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.128931 kubelet[2644]: W1213 13:27:21.128930 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.128978 kubelet[2644]: E1213 13:27:21.128938 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.129083 kubelet[2644]: E1213 13:27:21.129071 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.129112 kubelet[2644]: W1213 13:27:21.129084 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.129112 kubelet[2644]: E1213 13:27:21.129092 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.129237 kubelet[2644]: E1213 13:27:21.129221 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.129237 kubelet[2644]: W1213 13:27:21.129236 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.129282 kubelet[2644]: E1213 13:27:21.129243 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.129398 kubelet[2644]: E1213 13:27:21.129379 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.129398 kubelet[2644]: W1213 13:27:21.129390 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.129398 kubelet[2644]: E1213 13:27:21.129397 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.129531 kubelet[2644]: E1213 13:27:21.129521 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.129555 kubelet[2644]: W1213 13:27:21.129535 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.129555 kubelet[2644]: E1213 13:27:21.129543 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.152925 kubelet[2644]: E1213 13:27:21.152904 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.152925 kubelet[2644]: W1213 13:27:21.152920 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.153021 kubelet[2644]: E1213 13:27:21.152933 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.153156 kubelet[2644]: E1213 13:27:21.153144 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.153156 kubelet[2644]: W1213 13:27:21.153157 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.153212 kubelet[2644]: E1213 13:27:21.153172 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.153431 kubelet[2644]: E1213 13:27:21.153404 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.153431 kubelet[2644]: W1213 13:27:21.153415 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.153431 kubelet[2644]: E1213 13:27:21.153427 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.153632 kubelet[2644]: E1213 13:27:21.153618 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.153632 kubelet[2644]: W1213 13:27:21.153629 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.153685 kubelet[2644]: E1213 13:27:21.153642 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.153797 kubelet[2644]: E1213 13:27:21.153787 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.153797 kubelet[2644]: W1213 13:27:21.153796 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.153850 kubelet[2644]: E1213 13:27:21.153808 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.153955 kubelet[2644]: E1213 13:27:21.153941 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.153955 kubelet[2644]: W1213 13:27:21.153950 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.154009 kubelet[2644]: E1213 13:27:21.153958 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.154156 kubelet[2644]: E1213 13:27:21.154143 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.154156 kubelet[2644]: W1213 13:27:21.154155 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.154205 kubelet[2644]: E1213 13:27:21.154168 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.154412 kubelet[2644]: E1213 13:27:21.154394 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.154445 kubelet[2644]: W1213 13:27:21.154411 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.154445 kubelet[2644]: E1213 13:27:21.154428 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.154606 kubelet[2644]: E1213 13:27:21.154592 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.154606 kubelet[2644]: W1213 13:27:21.154606 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.154654 kubelet[2644]: E1213 13:27:21.154622 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.154789 kubelet[2644]: E1213 13:27:21.154777 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.154789 kubelet[2644]: W1213 13:27:21.154787 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.154840 kubelet[2644]: E1213 13:27:21.154799 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.154951 kubelet[2644]: E1213 13:27:21.154938 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.154951 kubelet[2644]: W1213 13:27:21.154949 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.155007 kubelet[2644]: E1213 13:27:21.154989 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.155097 kubelet[2644]: E1213 13:27:21.155086 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.155097 kubelet[2644]: W1213 13:27:21.155095 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.155148 kubelet[2644]: E1213 13:27:21.155106 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.155255 kubelet[2644]: E1213 13:27:21.155244 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.155255 kubelet[2644]: W1213 13:27:21.155253 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.155308 kubelet[2644]: E1213 13:27:21.155264 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.155401 kubelet[2644]: E1213 13:27:21.155389 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.155401 kubelet[2644]: W1213 13:27:21.155397 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.155451 kubelet[2644]: E1213 13:27:21.155409 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.155565 kubelet[2644]: E1213 13:27:21.155555 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.155597 kubelet[2644]: W1213 13:27:21.155573 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.155597 kubelet[2644]: E1213 13:27:21.155586 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.155862 kubelet[2644]: E1213 13:27:21.155849 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.155862 kubelet[2644]: W1213 13:27:21.155862 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.155943 kubelet[2644]: E1213 13:27:21.155891 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.156066 kubelet[2644]: E1213 13:27:21.156054 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.156066 kubelet[2644]: W1213 13:27:21.156066 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.156109 kubelet[2644]: E1213 13:27:21.156074 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:21.156379 kubelet[2644]: E1213 13:27:21.156357 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:21.156413 kubelet[2644]: W1213 13:27:21.156379 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:21.156413 kubelet[2644]: E1213 13:27:21.156389 2644 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:21.858644 containerd[1470]: time="2024-12-13T13:27:21.858567777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 13:27:21.860617 containerd[1470]: time="2024-12-13T13:27:21.858968977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:21.861472 containerd[1470]: time="2024-12-13T13:27:21.861442580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 987.625973ms" Dec 13 13:27:21.861526 containerd[1470]: time="2024-12-13T13:27:21.861477700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 13:27:21.863573 containerd[1470]: time="2024-12-13T13:27:21.863539902Z" level=info 
msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:21.864376 containerd[1470]: time="2024-12-13T13:27:21.864340543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:21.879672 containerd[1470]: time="2024-12-13T13:27:21.879623279Z" level=info msg="CreateContainer within sandbox \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:27:21.890909 containerd[1470]: time="2024-12-13T13:27:21.890852611Z" level=info msg="CreateContainer within sandbox \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab\"" Dec 13 13:27:21.891618 containerd[1470]: time="2024-12-13T13:27:21.891506092Z" level=info msg="StartContainer for \"b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab\"" Dec 13 13:27:21.923029 systemd[1]: Started cri-containerd-b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab.scope - libcontainer container b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab. Dec 13 13:27:21.948356 containerd[1470]: time="2024-12-13T13:27:21.948309992Z" level=info msg="StartContainer for \"b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab\" returns successfully" Dec 13 13:27:21.973548 systemd[1]: cri-containerd-b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab.scope: Deactivated successfully. 
Dec 13 13:27:21.986190 kubelet[2644]: E1213 13:27:21.985960 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:21.996332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab-rootfs.mount: Deactivated successfully. Dec 13 13:27:22.049734 kubelet[2644]: E1213 13:27:22.049226 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:22.074759 kubelet[2644]: I1213 13:27:22.074688 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-867dd587b-dljjl" podStartSLOduration=1.730069649 podStartE2EDuration="4.074665441s" podCreationTimestamp="2024-12-13 13:27:18 +0000 UTC" firstStartedPulling="2024-12-13 13:27:18.528957655 +0000 UTC m=+21.624046499" lastFinishedPulling="2024-12-13 13:27:20.873553407 +0000 UTC m=+23.968642291" observedRunningTime="2024-12-13 13:27:21.055650208 +0000 UTC m=+24.150739052" watchObservedRunningTime="2024-12-13 13:27:22.074665441 +0000 UTC m=+25.169754285" Dec 13 13:27:22.078343 kubelet[2644]: I1213 13:27:22.078094 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:27:22.079183 kubelet[2644]: E1213 13:27:22.079164 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:22.156738 containerd[1470]: time="2024-12-13T13:27:22.151186956Z" level=info msg="shim disconnected" id=b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab 
namespace=k8s.io Dec 13 13:27:22.156738 containerd[1470]: time="2024-12-13T13:27:22.156636202Z" level=warning msg="cleaning up after shim disconnected" id=b16dfa4395fdd6847f005eb5a63fc2ca4e59bd07fe28f897e8b792b01e958dab namespace=k8s.io Dec 13 13:27:22.156738 containerd[1470]: time="2024-12-13T13:27:22.156649882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:27:23.051184 kubelet[2644]: E1213 13:27:23.050986 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:23.052510 containerd[1470]: time="2024-12-13T13:27:23.052321446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:27:23.987053 kubelet[2644]: E1213 13:27:23.987001 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:24.736788 kubelet[2644]: I1213 13:27:24.736200 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:27:24.737506 kubelet[2644]: E1213 13:27:24.736926 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:25.055338 kubelet[2644]: E1213 13:27:25.055239 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:25.880851 systemd[1]: Started sshd@7-10.0.0.123:22-10.0.0.1:39322.service - OpenSSH per-connection server daemon (10.0.0.1:39322). 
Dec 13 13:27:25.929475 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 39322 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:25.931206 sshd-session[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:25.936938 systemd-logind[1454]: New session 8 of user core. Dec 13 13:27:25.945055 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:27:25.986888 kubelet[2644]: E1213 13:27:25.986841 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:26.101652 sshd[3332]: Connection closed by 10.0.0.1 port 39322 Dec 13 13:27:26.102800 sshd-session[3329]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:26.108939 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:27:26.109257 systemd[1]: sshd@7-10.0.0.123:22-10.0.0.1:39322.service: Deactivated successfully. Dec 13 13:27:26.111300 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:27:26.113368 systemd-logind[1454]: Removed session 8. 
Dec 13 13:27:26.978906 containerd[1470]: time="2024-12-13T13:27:26.975918641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:26.979941 containerd[1470]: time="2024-12-13T13:27:26.979636084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 13:27:26.981045 containerd[1470]: time="2024-12-13T13:27:26.980678405Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:26.983618 containerd[1470]: time="2024-12-13T13:27:26.983581087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:26.985606 containerd[1470]: time="2024-12-13T13:27:26.985494889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.933113443s" Dec 13 13:27:26.985606 containerd[1470]: time="2024-12-13T13:27:26.985525529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 13:27:26.991249 containerd[1470]: time="2024-12-13T13:27:26.991128573Z" level=info msg="CreateContainer within sandbox \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:27:27.016217 containerd[1470]: time="2024-12-13T13:27:27.016157272Z" level=info msg="CreateContainer 
within sandbox \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed\"" Dec 13 13:27:27.017765 containerd[1470]: time="2024-12-13T13:27:27.017730873Z" level=info msg="StartContainer for \"4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed\"" Dec 13 13:27:27.059080 systemd[1]: Started cri-containerd-4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed.scope - libcontainer container 4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed. Dec 13 13:27:27.175951 containerd[1470]: time="2024-12-13T13:27:27.175903226Z" level=info msg="StartContainer for \"4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed\" returns successfully" Dec 13 13:27:27.638446 systemd[1]: cri-containerd-4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed.scope: Deactivated successfully. Dec 13 13:27:27.666410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed-rootfs.mount: Deactivated successfully. 
Dec 13 13:27:27.668610 containerd[1470]: time="2024-12-13T13:27:27.668556820Z" level=info msg="shim disconnected" id=4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed namespace=k8s.io Dec 13 13:27:27.668610 containerd[1470]: time="2024-12-13T13:27:27.668610980Z" level=warning msg="cleaning up after shim disconnected" id=4a661798ed7cfd70cd4d2bbffa828b97c780e6a193f6600d9cd55290861effed namespace=k8s.io Dec 13 13:27:27.668728 containerd[1470]: time="2024-12-13T13:27:27.668620260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:27:27.730681 kubelet[2644]: I1213 13:27:27.730649 2644 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:27:27.760343 kubelet[2644]: I1213 13:27:27.759904 2644 topology_manager.go:215] "Topology Admit Handler" podUID="16d3a59c-4627-445a-a548-3d17c81a3ebd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tm76v" Dec 13 13:27:27.762389 kubelet[2644]: I1213 13:27:27.762334 2644 topology_manager.go:215] "Topology Admit Handler" podUID="7432b46c-534c-4718-843e-36f44a5a5ac1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:27.762579 kubelet[2644]: I1213 13:27:27.762559 2644 topology_manager.go:215] "Topology Admit Handler" podUID="fdd0e41e-e772-4088-a22c-e059031fe725" podNamespace="calico-system" podName="calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:27.764447 kubelet[2644]: I1213 13:27:27.763714 2644 topology_manager.go:215] "Topology Admit Handler" podUID="77036710-cd93-4856-bfc5-037aee276132" podNamespace="calico-apiserver" podName="calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:27.766571 kubelet[2644]: I1213 13:27:27.766502 2644 topology_manager.go:215] "Topology Admit Handler" podUID="b1402681-1dab-4e27-bb98-220ec86ddde8" podNamespace="calico-apiserver" podName="calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:27.770184 systemd[1]: Created slice kubepods-burstable-pod16d3a59c_4627_445a_a548_3d17c81a3ebd.slice - libcontainer container 
kubepods-burstable-pod16d3a59c_4627_445a_a548_3d17c81a3ebd.slice. Dec 13 13:27:27.776733 systemd[1]: Created slice kubepods-burstable-pod7432b46c_534c_4718_843e_36f44a5a5ac1.slice - libcontainer container kubepods-burstable-pod7432b46c_534c_4718_843e_36f44a5a5ac1.slice. Dec 13 13:27:27.785312 systemd[1]: Created slice kubepods-besteffort-podfdd0e41e_e772_4088_a22c_e059031fe725.slice - libcontainer container kubepods-besteffort-podfdd0e41e_e772_4088_a22c_e059031fe725.slice. Dec 13 13:27:27.792791 systemd[1]: Created slice kubepods-besteffort-pod77036710_cd93_4856_bfc5_037aee276132.slice - libcontainer container kubepods-besteffort-pod77036710_cd93_4856_bfc5_037aee276132.slice. Dec 13 13:27:27.798794 kubelet[2644]: I1213 13:27:27.798766 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77036710-cd93-4856-bfc5-037aee276132-calico-apiserver-certs\") pod \"calico-apiserver-69648fc998-4w2cp\" (UID: \"77036710-cd93-4856-bfc5-037aee276132\") " pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:27.799089 kubelet[2644]: I1213 13:27:27.799052 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5lz\" (UniqueName: \"kubernetes.io/projected/77036710-cd93-4856-bfc5-037aee276132-kube-api-access-bp5lz\") pod \"calico-apiserver-69648fc998-4w2cp\" (UID: \"77036710-cd93-4856-bfc5-037aee276132\") " pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:27.799254 kubelet[2644]: I1213 13:27:27.799235 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b1402681-1dab-4e27-bb98-220ec86ddde8-calico-apiserver-certs\") pod \"calico-apiserver-69648fc998-v2qqw\" (UID: \"b1402681-1dab-4e27-bb98-220ec86ddde8\") " pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 
13:27:27.799332 kubelet[2644]: I1213 13:27:27.799319 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whs56\" (UniqueName: \"kubernetes.io/projected/b1402681-1dab-4e27-bb98-220ec86ddde8-kube-api-access-whs56\") pod \"calico-apiserver-69648fc998-v2qqw\" (UID: \"b1402681-1dab-4e27-bb98-220ec86ddde8\") " pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:27.799426 kubelet[2644]: I1213 13:27:27.799410 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16d3a59c-4627-445a-a548-3d17c81a3ebd-config-volume\") pod \"coredns-7db6d8ff4d-tm76v\" (UID: \"16d3a59c-4627-445a-a548-3d17c81a3ebd\") " pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:27.799511 kubelet[2644]: I1213 13:27:27.799496 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5mzh\" (UniqueName: \"kubernetes.io/projected/fdd0e41e-e772-4088-a22c-e059031fe725-kube-api-access-n5mzh\") pod \"calico-kube-controllers-84949d5b96-26kkm\" (UID: \"fdd0e41e-e772-4088-a22c-e059031fe725\") " pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:27.800487 kubelet[2644]: I1213 13:27:27.799572 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7432b46c-534c-4718-843e-36f44a5a5ac1-config-volume\") pod \"coredns-7db6d8ff4d-4g2n7\" (UID: \"7432b46c-534c-4718-843e-36f44a5a5ac1\") " pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:27.800487 kubelet[2644]: I1213 13:27:27.799609 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxst\" (UniqueName: \"kubernetes.io/projected/7432b46c-534c-4718-843e-36f44a5a5ac1-kube-api-access-nwxst\") pod \"coredns-7db6d8ff4d-4g2n7\" (UID: 
\"7432b46c-534c-4718-843e-36f44a5a5ac1\") " pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:27.800487 kubelet[2644]: I1213 13:27:27.799628 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdd0e41e-e772-4088-a22c-e059031fe725-tigera-ca-bundle\") pod \"calico-kube-controllers-84949d5b96-26kkm\" (UID: \"fdd0e41e-e772-4088-a22c-e059031fe725\") " pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:27.800487 kubelet[2644]: I1213 13:27:27.799643 2644 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jl8\" (UniqueName: \"kubernetes.io/projected/16d3a59c-4627-445a-a548-3d17c81a3ebd-kube-api-access-c8jl8\") pod \"coredns-7db6d8ff4d-tm76v\" (UID: \"16d3a59c-4627-445a-a548-3d17c81a3ebd\") " pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:27.802148 systemd[1]: Created slice kubepods-besteffort-podb1402681_1dab_4e27_bb98_220ec86ddde8.slice - libcontainer container kubepods-besteffort-podb1402681_1dab_4e27_bb98_220ec86ddde8.slice. Dec 13 13:27:27.991164 systemd[1]: Created slice kubepods-besteffort-pod991aee5f_8651_49bc_ac7b_b0b4b2cc81c5.slice - libcontainer container kubepods-besteffort-pod991aee5f_8651_49bc_ac7b_b0b4b2cc81c5.slice. 
Dec 13 13:27:27.993657 containerd[1470]: time="2024-12-13T13:27:27.993598653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:0,}" Dec 13 13:27:28.071169 kubelet[2644]: E1213 13:27:28.067723 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:28.073100 containerd[1470]: time="2024-12-13T13:27:28.072848787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:27:28.073159 kubelet[2644]: E1213 13:27:28.072961 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:28.073375 containerd[1470]: time="2024-12-13T13:27:28.073336347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:0,}" Dec 13 13:27:28.081048 kubelet[2644]: E1213 13:27:28.080554 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:28.081140 containerd[1470]: time="2024-12-13T13:27:28.081003272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:0,}" Dec 13 13:27:28.090397 containerd[1470]: time="2024-12-13T13:27:28.089774998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:0,}" Dec 13 13:27:28.100846 containerd[1470]: time="2024-12-13T13:27:28.099946445Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:27:28.108308 containerd[1470]: time="2024-12-13T13:27:28.108274251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:27:28.176768 containerd[1470]: time="2024-12-13T13:27:28.176712417Z" level=error msg="Failed to destroy network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.178540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd-shm.mount: Deactivated successfully. Dec 13 13:27:28.181580 containerd[1470]: time="2024-12-13T13:27:28.181532420Z" level=error msg="encountered an error cleaning up failed sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.181637 containerd[1470]: time="2024-12-13T13:27:28.181618620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.186624 kubelet[2644]: 
E1213 13:27:28.186414 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.186624 kubelet[2644]: E1213 13:27:28.186500 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:28.186624 kubelet[2644]: E1213 13:27:28.186518 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:28.186787 kubelet[2644]: E1213 13:27:28.186584 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:28.281640 containerd[1470]: time="2024-12-13T13:27:28.281530847Z" level=error msg="Failed to destroy network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.284128 containerd[1470]: time="2024-12-13T13:27:28.282842288Z" level=error msg="encountered an error cleaning up failed sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.284128 containerd[1470]: time="2024-12-13T13:27:28.282920968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.284442 kubelet[2644]: E1213 13:27:28.284388 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 13:27:28.284495 kubelet[2644]: E1213 13:27:28.284455 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:28.284495 kubelet[2644]: E1213 13:27:28.284475 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:28.284545 kubelet[2644]: E1213 13:27:28.284512 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tm76v" podUID="16d3a59c-4627-445a-a548-3d17c81a3ebd" Dec 13 13:27:28.288004 containerd[1470]: time="2024-12-13T13:27:28.287951811Z" level=error msg="Failed to destroy network for sandbox 
\"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.288292 containerd[1470]: time="2024-12-13T13:27:28.288266772Z" level=error msg="encountered an error cleaning up failed sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.288340 containerd[1470]: time="2024-12-13T13:27:28.288316892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.288567 kubelet[2644]: E1213 13:27:28.288529 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.288620 kubelet[2644]: E1213 13:27:28.288578 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:28.288620 kubelet[2644]: E1213 13:27:28.288596 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:28.288666 kubelet[2644]: E1213 13:27:28.288630 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" podUID="b1402681-1dab-4e27-bb98-220ec86ddde8" Dec 13 13:27:28.289466 containerd[1470]: time="2024-12-13T13:27:28.289438012Z" level=error msg="Failed to destroy network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.289692 
containerd[1470]: time="2024-12-13T13:27:28.289672373Z" level=error msg="encountered an error cleaning up failed sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.289734 containerd[1470]: time="2024-12-13T13:27:28.289713493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.290386 kubelet[2644]: E1213 13:27:28.289856 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.290508 kubelet[2644]: E1213 13:27:28.290482 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:28.290555 kubelet[2644]: E1213 13:27:28.290520 2644 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:28.290582 kubelet[2644]: E1213 13:27:28.290559 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g2n7" podUID="7432b46c-534c-4718-843e-36f44a5a5ac1" Dec 13 13:27:28.292713 containerd[1470]: time="2024-12-13T13:27:28.292677735Z" level=error msg="Failed to destroy network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.292990 containerd[1470]: time="2024-12-13T13:27:28.292965495Z" level=error msg="encountered an error cleaning up failed sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.293081 containerd[1470]: time="2024-12-13T13:27:28.293057295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.293299 kubelet[2644]: E1213 13:27:28.293268 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.293406 kubelet[2644]: E1213 13:27:28.293311 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:28.293406 kubelet[2644]: E1213 13:27:28.293328 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:28.293406 kubelet[2644]: E1213 13:27:28.293373 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" podUID="fdd0e41e-e772-4088-a22c-e059031fe725" Dec 13 13:27:28.295695 containerd[1470]: time="2024-12-13T13:27:28.295660857Z" level=error msg="Failed to destroy network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.296288 containerd[1470]: time="2024-12-13T13:27:28.296258977Z" level=error msg="encountered an error cleaning up failed sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.296392 containerd[1470]: time="2024-12-13T13:27:28.296320177Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.296577 kubelet[2644]: E1213 13:27:28.296553 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:28.296618 kubelet[2644]: E1213 13:27:28.296592 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:28.296618 kubelet[2644]: E1213 13:27:28.296611 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:28.296675 kubelet[2644]: E1213 13:27:28.296654 2644 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" podUID="77036710-cd93-4856-bfc5-037aee276132" Dec 13 13:27:29.013086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f-shm.mount: Deactivated successfully. Dec 13 13:27:29.069948 kubelet[2644]: I1213 13:27:29.069912 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4" Dec 13 13:27:29.071446 kubelet[2644]: I1213 13:27:29.071421 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f" Dec 13 13:27:29.071885 containerd[1470]: time="2024-12-13T13:27:29.070510495Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\"" Dec 13 13:27:29.072270 containerd[1470]: time="2024-12-13T13:27:29.071924976Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\"" Dec 13 13:27:29.072270 containerd[1470]: time="2024-12-13T13:27:29.072066016Z" level=info msg="Ensure that sandbox 1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4 in task-service has been cleanup successfully" Dec 13 13:27:29.072270 
containerd[1470]: time="2024-12-13T13:27:29.072216536Z" level=info msg="Ensure that sandbox 9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f in task-service has been cleanup successfully" Dec 13 13:27:29.072972 containerd[1470]: time="2024-12-13T13:27:29.072275056Z" level=info msg="TearDown network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully" Dec 13 13:27:29.072972 containerd[1470]: time="2024-12-13T13:27:29.072304656Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully" Dec 13 13:27:29.073206 containerd[1470]: time="2024-12-13T13:27:29.073172777Z" level=info msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully" Dec 13 13:27:29.073389 containerd[1470]: time="2024-12-13T13:27:29.073202857Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully" Dec 13 13:27:29.073426 kubelet[2644]: I1213 13:27:29.073200 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8" Dec 13 13:27:29.073773 kubelet[2644]: E1213 13:27:29.073551 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:29.073807 containerd[1470]: time="2024-12-13T13:27:29.073736697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:1,}" Dec 13 13:27:29.074847 containerd[1470]: time="2024-12-13T13:27:29.073923657Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\"" Dec 13 13:27:29.074847 containerd[1470]: time="2024-12-13T13:27:29.073952497Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:1,}" Dec 13 13:27:29.074847 containerd[1470]: time="2024-12-13T13:27:29.074067857Z" level=info msg="Ensure that sandbox 82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8 in task-service has been cleanup successfully" Dec 13 13:27:29.075154 systemd[1]: run-netns-cni\x2db864863d\x2df053\x2d1d0d\x2d6f8c\x2d5960583b5bf2.mount: Deactivated successfully. Dec 13 13:27:29.076419 kubelet[2644]: I1213 13:27:29.075234 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69" Dec 13 13:27:29.075266 systemd[1]: run-netns-cni\x2d39260ab5\x2d544e\x2db29b\x2dd600\x2d8925eb2579de.mount: Deactivated successfully. Dec 13 13:27:29.076536 kubelet[2644]: I1213 13:27:29.076470 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018" Dec 13 13:27:29.076904 containerd[1470]: time="2024-12-13T13:27:29.075112538Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully" Dec 13 13:27:29.076904 containerd[1470]: time="2024-12-13T13:27:29.076817139Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully" Dec 13 13:27:29.076904 containerd[1470]: time="2024-12-13T13:27:29.076863179Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\"" Dec 13 13:27:29.077067 containerd[1470]: time="2024-12-13T13:27:29.077039859Z" level=info msg="Ensure that sandbox c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018 in task-service has been cleanup successfully" Dec 13 13:27:29.077067 containerd[1470]: time="2024-12-13T13:27:29.075721778Z" level=info 
msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\"" Dec 13 13:27:29.077218 containerd[1470]: time="2024-12-13T13:27:29.077196299Z" level=info msg="Ensure that sandbox d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69 in task-service has been cleanup successfully" Dec 13 13:27:29.081121 containerd[1470]: time="2024-12-13T13:27:29.078089660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:27:29.081121 containerd[1470]: time="2024-12-13T13:27:29.079854861Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully" Dec 13 13:27:29.081121 containerd[1470]: time="2024-12-13T13:27:29.079891741Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully" Dec 13 13:27:29.081121 containerd[1470]: time="2024-12-13T13:27:29.079884221Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully" Dec 13 13:27:29.081121 containerd[1470]: time="2024-12-13T13:27:29.079946381Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully" Dec 13 13:27:29.081121 containerd[1470]: time="2024-12-13T13:27:29.080990462Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\"" Dec 13 13:27:29.080307 systemd[1]: run-netns-cni\x2dabb3e57c\x2de1dd\x2d7340\x2dde69\x2dd47e89f145b7.mount: Deactivated successfully. 
Dec 13 13:27:29.081357 kubelet[2644]: I1213 13:27:29.079757 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd" Dec 13 13:27:29.081357 kubelet[2644]: E1213 13:27:29.080139 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:29.081472 containerd[1470]: time="2024-12-13T13:27:29.081151222Z" level=info msg="Ensure that sandbox 27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd in task-service has been cleanup successfully" Dec 13 13:27:29.080432 systemd[1]: run-netns-cni\x2dc2eaa064\x2df2af\x2d48aa\x2d0bb8\x2df482a8a464be.mount: Deactivated successfully. Dec 13 13:27:29.080487 systemd[1]: run-netns-cni\x2dcbb0a0b7\x2d86f2\x2d0583\x2d7005\x2d5aa1950c9b3b.mount: Deactivated successfully. Dec 13 13:27:29.082704 containerd[1470]: time="2024-12-13T13:27:29.082382142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:27:29.090892 containerd[1470]: time="2024-12-13T13:27:29.090463268Z" level=info msg="TearDown network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully" Dec 13 13:27:29.090892 containerd[1470]: time="2024-12-13T13:27:29.090504068Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully" Dec 13 13:27:29.091257 containerd[1470]: time="2024-12-13T13:27:29.091231908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:1,}" Dec 13 13:27:29.095297 systemd[1]: run-netns-cni\x2d92b6b024\x2db368\x2dec28\x2dd5ce\x2d8ec705b98ff9.mount: Deactivated successfully. 
Dec 13 13:27:29.102041 containerd[1470]: time="2024-12-13T13:27:29.101711755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:1,}" Dec 13 13:27:29.239069 containerd[1470]: time="2024-12-13T13:27:29.238992841Z" level=error msg="Failed to destroy network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.239241 containerd[1470]: time="2024-12-13T13:27:29.239035761Z" level=error msg="Failed to destroy network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.239640 containerd[1470]: time="2024-12-13T13:27:29.239599482Z" level=error msg="encountered an error cleaning up failed sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.239701 containerd[1470]: time="2024-12-13T13:27:29.239680242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 13:27:29.239777 containerd[1470]: time="2024-12-13T13:27:29.239720842Z" level=error msg="encountered an error cleaning up failed sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.239817 containerd[1470]: time="2024-12-13T13:27:29.239797282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.240151 kubelet[2644]: E1213 13:27:29.240114 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.240151 kubelet[2644]: E1213 13:27:29.240121 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.240264 kubelet[2644]: E1213 13:27:29.240174 2644 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:29.240264 kubelet[2644]: E1213 13:27:29.240193 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:29.240264 kubelet[2644]: E1213 13:27:29.240229 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" podUID="fdd0e41e-e772-4088-a22c-e059031fe725" Dec 13 13:27:29.240723 kubelet[2644]: E1213 13:27:29.240169 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:29.240723 kubelet[2644]: E1213 13:27:29.240407 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:29.240723 kubelet[2644]: E1213 13:27:29.240469 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" podUID="b1402681-1dab-4e27-bb98-220ec86ddde8" Dec 13 13:27:29.247575 containerd[1470]: time="2024-12-13T13:27:29.247422847Z" level=error msg="Failed to destroy network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.248080 containerd[1470]: time="2024-12-13T13:27:29.248030447Z" level=error msg="encountered an error cleaning up failed sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.248143 containerd[1470]: time="2024-12-13T13:27:29.248109687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.248367 kubelet[2644]: E1213 13:27:29.248288 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.248367 kubelet[2644]: E1213 13:27:29.248338 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 
13:27:29.248471 kubelet[2644]: E1213 13:27:29.248376 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:29.248471 kubelet[2644]: E1213 13:27:29.248415 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:29.260834 containerd[1470]: time="2024-12-13T13:27:29.260796615Z" level=error msg="Failed to destroy network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.261473 containerd[1470]: time="2024-12-13T13:27:29.261444255Z" level=error msg="encountered an error cleaning up failed sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.262105 containerd[1470]: time="2024-12-13T13:27:29.261572736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.262161 kubelet[2644]: E1213 13:27:29.261786 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.262161 kubelet[2644]: E1213 13:27:29.261836 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:29.262161 kubelet[2644]: E1213 13:27:29.261855 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:29.262257 kubelet[2644]: E1213 13:27:29.261912 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g2n7" podUID="7432b46c-534c-4718-843e-36f44a5a5ac1" Dec 13 13:27:29.271291 containerd[1470]: time="2024-12-13T13:27:29.270009621Z" level=error msg="Failed to destroy network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.271942 containerd[1470]: time="2024-12-13T13:27:29.270522621Z" level=error msg="encountered an error cleaning up failed sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.272010 containerd[1470]: time="2024-12-13T13:27:29.271984342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:1,} failed, 
error" error="failed to setup network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.272377 kubelet[2644]: E1213 13:27:29.272264 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.272377 kubelet[2644]: E1213 13:27:29.272323 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:29.272377 kubelet[2644]: E1213 13:27:29.272344 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:29.272755 kubelet[2644]: E1213 13:27:29.272531 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tm76v" podUID="16d3a59c-4627-445a-a548-3d17c81a3ebd" Dec 13 13:27:29.276386 containerd[1470]: time="2024-12-13T13:27:29.276266265Z" level=error msg="Failed to destroy network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.276672 containerd[1470]: time="2024-12-13T13:27:29.276645225Z" level=error msg="encountered an error cleaning up failed sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.276811 containerd[1470]: time="2024-12-13T13:27:29.276770705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.277145 kubelet[2644]: E1213 13:27:29.277068 2644 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:29.277145 kubelet[2644]: E1213 13:27:29.277117 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:29.277145 kubelet[2644]: E1213 13:27:29.277134 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:29.277303 kubelet[2644]: E1213 13:27:29.277180 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" podUID="77036710-cd93-4856-bfc5-037aee276132" Dec 13 13:27:30.013717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98-shm.mount: Deactivated successfully. Dec 13 13:27:30.083351 kubelet[2644]: I1213 13:27:30.083313 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6" Dec 13 13:27:30.084597 containerd[1470]: time="2024-12-13T13:27:30.084374491Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\"" Dec 13 13:27:30.084597 containerd[1470]: time="2024-12-13T13:27:30.084595571Z" level=info msg="Ensure that sandbox 6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6 in task-service has been cleanup successfully" Dec 13 13:27:30.085612 containerd[1470]: time="2024-12-13T13:27:30.085565732Z" level=info msg="TearDown network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" successfully" Dec 13 13:27:30.085612 containerd[1470]: time="2024-12-13T13:27:30.085594012Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" returns successfully" Dec 13 13:27:30.087994 containerd[1470]: time="2024-12-13T13:27:30.087951693Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\"" Dec 13 13:27:30.088064 containerd[1470]: time="2024-12-13T13:27:30.088045093Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully" Dec 13 13:27:30.088064 containerd[1470]: time="2024-12-13T13:27:30.088055293Z" level=info msg="StopPodSandbox for 
\"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully" Dec 13 13:27:30.088118 kubelet[2644]: I1213 13:27:30.088047 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6" Dec 13 13:27:30.088902 kubelet[2644]: E1213 13:27:30.088532 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:30.088657 systemd[1]: run-netns-cni\x2dc4444325\x2d6561\x2d5c5e\x2da39b\x2d98246a50a77e.mount: Deactivated successfully. Dec 13 13:27:30.089778 containerd[1470]: time="2024-12-13T13:27:30.089144534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:2,}" Dec 13 13:27:30.089778 containerd[1470]: time="2024-12-13T13:27:30.089208974Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\"" Dec 13 13:27:30.089778 containerd[1470]: time="2024-12-13T13:27:30.089386254Z" level=info msg="Ensure that sandbox ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6 in task-service has been cleanup successfully" Dec 13 13:27:30.089778 containerd[1470]: time="2024-12-13T13:27:30.089706854Z" level=info msg="TearDown network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" successfully" Dec 13 13:27:30.089778 containerd[1470]: time="2024-12-13T13:27:30.089726294Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" returns successfully" Dec 13 13:27:30.090235 containerd[1470]: time="2024-12-13T13:27:30.090208935Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\"" Dec 13 13:27:30.090305 containerd[1470]: 
time="2024-12-13T13:27:30.090290295Z" level=info msg="TearDown network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully" Dec 13 13:27:30.090337 containerd[1470]: time="2024-12-13T13:27:30.090303935Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully" Dec 13 13:27:30.090995 containerd[1470]: time="2024-12-13T13:27:30.090724975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:2,}" Dec 13 13:27:30.091067 kubelet[2644]: I1213 13:27:30.090762 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98" Dec 13 13:27:30.091204 containerd[1470]: time="2024-12-13T13:27:30.091182135Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\"" Dec 13 13:27:30.091360 containerd[1470]: time="2024-12-13T13:27:30.091333375Z" level=info msg="Ensure that sandbox 2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98 in task-service has been cleanup successfully" Dec 13 13:27:30.091560 containerd[1470]: time="2024-12-13T13:27:30.091535455Z" level=info msg="TearDown network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" successfully" Dec 13 13:27:30.091560 containerd[1470]: time="2024-12-13T13:27:30.091555575Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" returns successfully" Dec 13 13:27:30.091941 containerd[1470]: time="2024-12-13T13:27:30.091909176Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\"" Dec 13 13:27:30.092011 containerd[1470]: time="2024-12-13T13:27:30.091997616Z" level=info msg="TearDown network for sandbox 
\"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully" Dec 13 13:27:30.092038 containerd[1470]: time="2024-12-13T13:27:30.092009296Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully" Dec 13 13:27:30.092297 systemd[1]: run-netns-cni\x2d3a5ab7db\x2d9e22\x2d0a6b\x2d316f\x2da3095c73566e.mount: Deactivated successfully. Dec 13 13:27:30.093954 containerd[1470]: time="2024-12-13T13:27:30.093409697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:2,}" Dec 13 13:27:30.094034 kubelet[2644]: I1213 13:27:30.093913 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.094334057Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\"" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.094490697Z" level=info msg="Ensure that sandbox 2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65 in task-service has been cleanup successfully" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.094669897Z" level=info msg="TearDown network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" successfully" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.094682097Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" returns successfully" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.095292338Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\"" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.095378938Z" level=info 
msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully" Dec 13 13:27:30.095429 containerd[1470]: time="2024-12-13T13:27:30.095388618Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully" Dec 13 13:27:30.095298 systemd[1]: run-netns-cni\x2d6b042ea7\x2d79aa\x2d0e9b\x2da6e7\x2db5710a7a06f4.mount: Deactivated successfully. Dec 13 13:27:30.095711 kubelet[2644]: E1213 13:27:30.095558 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:30.096907 containerd[1470]: time="2024-12-13T13:27:30.096318738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:2,}" Dec 13 13:27:30.097495 kubelet[2644]: I1213 13:27:30.097452 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341" Dec 13 13:27:30.097538 systemd[1]: run-netns-cni\x2d21f44780\x2d7679\x2d80c0\x2db568\x2d20d0542a4312.mount: Deactivated successfully. 
Dec 13 13:27:30.098487 containerd[1470]: time="2024-12-13T13:27:30.098148019Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\"" Dec 13 13:27:30.098487 containerd[1470]: time="2024-12-13T13:27:30.098464580Z" level=info msg="Ensure that sandbox b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341 in task-service has been cleanup successfully" Dec 13 13:27:30.098834 containerd[1470]: time="2024-12-13T13:27:30.098685780Z" level=info msg="TearDown network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" successfully" Dec 13 13:27:30.098834 containerd[1470]: time="2024-12-13T13:27:30.098709180Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" returns successfully" Dec 13 13:27:30.099035 containerd[1470]: time="2024-12-13T13:27:30.098983700Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\"" Dec 13 13:27:30.099180 containerd[1470]: time="2024-12-13T13:27:30.099058980Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully" Dec 13 13:27:30.099180 containerd[1470]: time="2024-12-13T13:27:30.099126700Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully" Dec 13 13:27:30.099584 kubelet[2644]: I1213 13:27:30.099552 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0" Dec 13 13:27:30.099737 containerd[1470]: time="2024-12-13T13:27:30.099711180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:27:30.100382 containerd[1470]: time="2024-12-13T13:27:30.100356101Z" level=info msg="StopPodSandbox 
for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\"" Dec 13 13:27:30.100528 containerd[1470]: time="2024-12-13T13:27:30.100508141Z" level=info msg="Ensure that sandbox e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0 in task-service has been cleanup successfully" Dec 13 13:27:30.100682 containerd[1470]: time="2024-12-13T13:27:30.100665541Z" level=info msg="TearDown network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" successfully" Dec 13 13:27:30.100719 containerd[1470]: time="2024-12-13T13:27:30.100682461Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" returns successfully" Dec 13 13:27:30.101078 containerd[1470]: time="2024-12-13T13:27:30.101055061Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\"" Dec 13 13:27:30.101149 containerd[1470]: time="2024-12-13T13:27:30.101134941Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully" Dec 13 13:27:30.101181 containerd[1470]: time="2024-12-13T13:27:30.101148461Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully" Dec 13 13:27:30.101571 containerd[1470]: time="2024-12-13T13:27:30.101542181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:27:30.338485 containerd[1470]: time="2024-12-13T13:27:30.338365521Z" level=error msg="Failed to destroy network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.341453 containerd[1470]: 
time="2024-12-13T13:27:30.341327323Z" level=error msg="encountered an error cleaning up failed sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.341453 containerd[1470]: time="2024-12-13T13:27:30.341407043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.341933 kubelet[2644]: E1213 13:27:30.341630 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.341933 kubelet[2644]: E1213 13:27:30.341694 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:30.341933 kubelet[2644]: E1213 13:27:30.341715 2644 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:30.342079 kubelet[2644]: E1213 13:27:30.341763 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g2n7" podUID="7432b46c-534c-4718-843e-36f44a5a5ac1" Dec 13 13:27:30.354170 containerd[1470]: time="2024-12-13T13:27:30.354120571Z" level=error msg="Failed to destroy network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.354764 containerd[1470]: time="2024-12-13T13:27:30.354715011Z" level=error msg="encountered an error cleaning up failed sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Dec 13 13:27:30.354821 containerd[1470]: time="2024-12-13T13:27:30.354780091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.355086 kubelet[2644]: E1213 13:27:30.355027 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.355164 kubelet[2644]: E1213 13:27:30.355084 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:30.355164 kubelet[2644]: E1213 13:27:30.355111 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:30.355241 kubelet[2644]: E1213 13:27:30.355196 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" podUID="77036710-cd93-4856-bfc5-037aee276132" Dec 13 13:27:30.367825 containerd[1470]: time="2024-12-13T13:27:30.367700579Z" level=error msg="Failed to destroy network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.368791 containerd[1470]: time="2024-12-13T13:27:30.368758259Z" level=error msg="encountered an error cleaning up failed sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.369165 containerd[1470]: time="2024-12-13T13:27:30.368995300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to 
setup network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.369926 kubelet[2644]: E1213 13:27:30.369849 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.370021 kubelet[2644]: E1213 13:27:30.369943 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:30.370021 kubelet[2644]: E1213 13:27:30.369979 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:30.370229 kubelet[2644]: E1213 13:27:30.370022 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:30.372325 containerd[1470]: time="2024-12-13T13:27:30.372288061Z" level=error msg="Failed to destroy network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.374292 containerd[1470]: time="2024-12-13T13:27:30.374193303Z" level=error msg="encountered an error cleaning up failed sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.374609 containerd[1470]: time="2024-12-13T13:27:30.374335423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.374609 containerd[1470]: time="2024-12-13T13:27:30.374454263Z" level=error msg="Failed 
to destroy network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.374692 kubelet[2644]: E1213 13:27:30.374570 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.374692 kubelet[2644]: E1213 13:27:30.374617 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:30.374692 kubelet[2644]: E1213 13:27:30.374649 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:30.374787 kubelet[2644]: E1213 13:27:30.374691 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" podUID="fdd0e41e-e772-4088-a22c-e059031fe725" Dec 13 13:27:30.375310 containerd[1470]: time="2024-12-13T13:27:30.375279823Z" level=error msg="encountered an error cleaning up failed sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.375718 containerd[1470]: time="2024-12-13T13:27:30.375690344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.375982 containerd[1470]: time="2024-12-13T13:27:30.375944904Z" level=error msg="Failed to destroy network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.376331 containerd[1470]: time="2024-12-13T13:27:30.376250224Z" level=error msg="encountered an error cleaning up failed sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.376331 containerd[1470]: time="2024-12-13T13:27:30.376310864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.377024 kubelet[2644]: E1213 13:27:30.376957 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.377024 kubelet[2644]: E1213 13:27:30.377010 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:30.377118 kubelet[2644]: E1213 13:27:30.377027 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:30.377118 kubelet[2644]: E1213 13:27:30.377059 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" podUID="b1402681-1dab-4e27-bb98-220ec86ddde8" Dec 13 13:27:30.377182 kubelet[2644]: E1213 13:27:30.377128 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:30.377182 kubelet[2644]: E1213 13:27:30.377150 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:30.377182 kubelet[2644]: E1213 13:27:30.377163 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:30.377250 kubelet[2644]: E1213 13:27:30.377192 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tm76v" podUID="16d3a59c-4627-445a-a548-3d17c81a3ebd" Dec 13 13:27:31.015306 systemd[1]: run-netns-cni\x2d62c76647\x2d51ac\x2de119\x2dafc9\x2d5477dfba87ad.mount: Deactivated successfully. Dec 13 13:27:31.015393 systemd[1]: run-netns-cni\x2d3cd6ed7e\x2de428\x2d2635\x2d70c7\x2daa792aa5061e.mount: Deactivated successfully. 
Dec 13 13:27:31.103222 kubelet[2644]: I1213 13:27:31.103188 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718" Dec 13 13:27:31.104333 containerd[1470]: time="2024-12-13T13:27:31.104118770Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\"" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.104281811Z" level=info msg="Ensure that sandbox 5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718 in task-service has been cleanup successfully" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.105422851Z" level=info msg="TearDown network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" successfully" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.105441291Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" returns successfully" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.106716012Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\"" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.106791492Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\"" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.106948412Z" level=info msg="Ensure that sandbox 98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849 in task-service has been cleanup successfully" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.106800372Z" level=info msg="TearDown network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" successfully" Dec 13 13:27:31.107117 containerd[1470]: time="2024-12-13T13:27:31.107058252Z" level=info msg="StopPodSandbox for 
\"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" returns successfully" Dec 13 13:27:31.107295 kubelet[2644]: I1213 13:27:31.105609 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849" Dec 13 13:27:31.107329 containerd[1470]: time="2024-12-13T13:27:31.107159132Z" level=info msg="TearDown network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" successfully" Dec 13 13:27:31.107329 containerd[1470]: time="2024-12-13T13:27:31.107176292Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" returns successfully" Dec 13 13:27:31.109092 containerd[1470]: time="2024-12-13T13:27:31.108727093Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\"" Dec 13 13:27:31.109092 containerd[1470]: time="2024-12-13T13:27:31.108834493Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully" Dec 13 13:27:31.109092 containerd[1470]: time="2024-12-13T13:27:31.108845893Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully" Dec 13 13:27:31.109092 containerd[1470]: time="2024-12-13T13:27:31.108920493Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\"" Dec 13 13:27:31.109092 containerd[1470]: time="2024-12-13T13:27:31.108968293Z" level=info msg="TearDown network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" successfully" Dec 13 13:27:31.109092 containerd[1470]: time="2024-12-13T13:27:31.108983733Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" returns successfully" Dec 13 13:27:31.110111 containerd[1470]: time="2024-12-13T13:27:31.109965014Z" level=info 
msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\"" Dec 13 13:27:31.110111 containerd[1470]: time="2024-12-13T13:27:31.110042254Z" level=info msg="TearDown network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully" Dec 13 13:27:31.110111 containerd[1470]: time="2024-12-13T13:27:31.110051494Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully" Dec 13 13:27:31.110213 kubelet[2644]: E1213 13:27:31.110105 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:31.110640 containerd[1470]: time="2024-12-13T13:27:31.110567654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:3,}" Dec 13 13:27:31.110708 kubelet[2644]: I1213 13:27:31.110573 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2" Dec 13 13:27:31.110998 containerd[1470]: time="2024-12-13T13:27:31.110856854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:3,}" Dec 13 13:27:31.111331 containerd[1470]: time="2024-12-13T13:27:31.111216574Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\"" Dec 13 13:27:31.111387 containerd[1470]: time="2024-12-13T13:27:31.111365294Z" level=info msg="Ensure that sandbox 84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2 in task-service has been cleanup successfully" Dec 13 13:27:31.113205 systemd[1]: run-netns-cni\x2d4e804f89\x2d57b0\x2d9958\x2d9ea3\x2db9758cc04463.mount: Deactivated successfully. 
Dec 13 13:27:31.113294 systemd[1]: run-netns-cni\x2dbe471a7d\x2d130e\x2d0816\x2dca1f\x2df47062ca32ce.mount: Deactivated successfully. Dec 13 13:27:31.115322 containerd[1470]: time="2024-12-13T13:27:31.115273977Z" level=info msg="TearDown network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" successfully" Dec 13 13:27:31.115322 containerd[1470]: time="2024-12-13T13:27:31.115305337Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" returns successfully" Dec 13 13:27:31.115624 containerd[1470]: time="2024-12-13T13:27:31.115602777Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\"" Dec 13 13:27:31.115774 containerd[1470]: time="2024-12-13T13:27:31.115743537Z" level=info msg="TearDown network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" successfully" Dec 13 13:27:31.115774 containerd[1470]: time="2024-12-13T13:27:31.115773657Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" returns successfully" Dec 13 13:27:31.116023 containerd[1470]: time="2024-12-13T13:27:31.115977297Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\"" Dec 13 13:27:31.116103 containerd[1470]: time="2024-12-13T13:27:31.116052817Z" level=info msg="TearDown network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully" Dec 13 13:27:31.116103 containerd[1470]: time="2024-12-13T13:27:31.116063457Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully" Dec 13 13:27:31.116766 kubelet[2644]: I1213 13:27:31.116405 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b" Dec 13 13:27:31.117132 containerd[1470]: 
time="2024-12-13T13:27:31.117092698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:3,}" Dec 13 13:27:31.117509 systemd[1]: run-netns-cni\x2d93fca45e\x2d5bb8\x2d594d\x2d9b9c\x2d7080c3251715.mount: Deactivated successfully. Dec 13 13:27:31.118888 containerd[1470]: time="2024-12-13T13:27:31.117115138Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\"" Dec 13 13:27:31.118888 containerd[1470]: time="2024-12-13T13:27:31.118704059Z" level=info msg="Ensure that sandbox 86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b in task-service has been cleanup successfully" Dec 13 13:27:31.120129 containerd[1470]: time="2024-12-13T13:27:31.120096739Z" level=info msg="TearDown network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" successfully" Dec 13 13:27:31.120129 containerd[1470]: time="2024-12-13T13:27:31.120120739Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" returns successfully" Dec 13 13:27:31.121109 kubelet[2644]: I1213 13:27:31.121078 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4" Dec 13 13:27:31.121716 containerd[1470]: time="2024-12-13T13:27:31.121669740Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\"" Dec 13 13:27:31.121900 containerd[1470]: time="2024-12-13T13:27:31.121770460Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\"" Dec 13 13:27:31.122932 containerd[1470]: time="2024-12-13T13:27:31.121781260Z" level=info msg="TearDown network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" successfully" Dec 13 13:27:31.122932 
containerd[1470]: time="2024-12-13T13:27:31.122790541Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" returns successfully" Dec 13 13:27:31.122932 containerd[1470]: time="2024-12-13T13:27:31.122216100Z" level=info msg="Ensure that sandbox 981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4 in task-service has been cleanup successfully" Dec 13 13:27:31.123481 containerd[1470]: time="2024-12-13T13:27:31.123270061Z" level=info msg="TearDown network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" successfully" Dec 13 13:27:31.123481 containerd[1470]: time="2024-12-13T13:27:31.123398501Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" returns successfully" Dec 13 13:27:31.123719 containerd[1470]: time="2024-12-13T13:27:31.123698741Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\"" Dec 13 13:27:31.123785 containerd[1470]: time="2024-12-13T13:27:31.123771341Z" level=info msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully" Dec 13 13:27:31.123811 containerd[1470]: time="2024-12-13T13:27:31.123784341Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully" Dec 13 13:27:31.124076 kubelet[2644]: I1213 13:27:31.124016 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34" Dec 13 13:27:31.124235 kubelet[2644]: E1213 13:27:31.124209 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:31.124988 containerd[1470]: time="2024-12-13T13:27:31.124958422Z" level=info msg="StopPodSandbox for 
\"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\"" Dec 13 13:27:31.125107 containerd[1470]: time="2024-12-13T13:27:31.125039902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:3,}" Dec 13 13:27:31.125283 containerd[1470]: time="2024-12-13T13:27:31.125046902Z" level=info msg="TearDown network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" successfully" Dec 13 13:27:31.125283 containerd[1470]: time="2024-12-13T13:27:31.125210262Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" returns successfully" Dec 13 13:27:31.125559 containerd[1470]: time="2024-12-13T13:27:31.125532862Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\"" Dec 13 13:27:31.125630 containerd[1470]: time="2024-12-13T13:27:31.125575582Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\"" Dec 13 13:27:31.125690 containerd[1470]: time="2024-12-13T13:27:31.125650382Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully" Dec 13 13:27:31.125690 containerd[1470]: time="2024-12-13T13:27:31.125659822Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully" Dec 13 13:27:31.125739 containerd[1470]: time="2024-12-13T13:27:31.125694902Z" level=info msg="Ensure that sandbox a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34 in task-service has been cleanup successfully" Dec 13 13:27:31.125970 containerd[1470]: time="2024-12-13T13:27:31.125938623Z" level=info msg="TearDown network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" successfully" Dec 13 13:27:31.125970 containerd[1470]: 
time="2024-12-13T13:27:31.125968703Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" returns successfully" Dec 13 13:27:31.126299 containerd[1470]: time="2024-12-13T13:27:31.126270623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:27:31.126514 systemd[1]: Started sshd@8-10.0.0.123:22-10.0.0.1:39330.service - OpenSSH per-connection server daemon (10.0.0.1:39330). Dec 13 13:27:31.127209 containerd[1470]: time="2024-12-13T13:27:31.127181543Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\"" Dec 13 13:27:31.127294 containerd[1470]: time="2024-12-13T13:27:31.127277863Z" level=info msg="TearDown network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" successfully" Dec 13 13:27:31.127294 containerd[1470]: time="2024-12-13T13:27:31.127291943Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" returns successfully" Dec 13 13:27:31.127722 containerd[1470]: time="2024-12-13T13:27:31.127642663Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\"" Dec 13 13:27:31.127771 containerd[1470]: time="2024-12-13T13:27:31.127755544Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully" Dec 13 13:27:31.127807 containerd[1470]: time="2024-12-13T13:27:31.127769584Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully" Dec 13 13:27:31.129120 systemd[1]: run-netns-cni\x2d0b3a869e\x2dac60\x2d75c7\x2db5c0\x2db81e36eaa2b4.mount: Deactivated successfully. 
Dec 13 13:27:31.129210 systemd[1]: run-netns-cni\x2d765b9aa5\x2d1db7\x2d888a\x2dc1e8\x2d03620dc83786.mount: Deactivated successfully. Dec 13 13:27:31.129266 systemd[1]: run-netns-cni\x2de49e0099\x2d8271\x2d28f9\x2dc8ea\x2d4476191342dc.mount: Deactivated successfully. Dec 13 13:27:31.129326 containerd[1470]: time="2024-12-13T13:27:31.129242904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:27:31.224912 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 39330 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:31.226050 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:31.235903 systemd-logind[1454]: New session 9 of user core. Dec 13 13:27:31.241214 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:27:31.290040 containerd[1470]: time="2024-12-13T13:27:31.289931193Z" level=error msg="Failed to destroy network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.310408 containerd[1470]: time="2024-12-13T13:27:31.310109165Z" level=error msg="encountered an error cleaning up failed sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.310615 containerd[1470]: time="2024-12-13T13:27:31.310287205Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.311053 kubelet[2644]: E1213 13:27:31.310986 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.311053 kubelet[2644]: E1213 13:27:31.311049 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:31.311273 kubelet[2644]: E1213 13:27:31.311069 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:31.311273 kubelet[2644]: E1213 13:27:31.311112 2644 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" podUID="fdd0e41e-e772-4088-a22c-e059031fe725" Dec 13 13:27:31.315008 containerd[1470]: time="2024-12-13T13:27:31.314967887Z" level=error msg="Failed to destroy network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.318194 containerd[1470]: time="2024-12-13T13:27:31.318152929Z" level=error msg="encountered an error cleaning up failed sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.320127 containerd[1470]: time="2024-12-13T13:27:31.319977610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.320265 kubelet[2644]: E1213 13:27:31.320227 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.320318 kubelet[2644]: E1213 13:27:31.320283 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:31.320318 kubelet[2644]: E1213 13:27:31.320308 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:31.320404 kubelet[2644]: E1213 13:27:31.320355 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:31.376927 containerd[1470]: time="2024-12-13T13:27:31.375845481Z" level=error msg="Failed to destroy network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.377980 containerd[1470]: time="2024-12-13T13:27:31.377950602Z" level=error msg="encountered an error cleaning up failed sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.378398 containerd[1470]: time="2024-12-13T13:27:31.378100162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.378825 kubelet[2644]: E1213 13:27:31.378736 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.378825 kubelet[2644]: E1213 13:27:31.378796 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:31.378825 kubelet[2644]: E1213 13:27:31.378816 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:31.379339 kubelet[2644]: E1213 13:27:31.378857 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" podUID="b1402681-1dab-4e27-bb98-220ec86ddde8" Dec 13 13:27:31.386267 containerd[1470]: time="2024-12-13T13:27:31.386169767Z" level=error msg="Failed to destroy network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.389267 containerd[1470]: time="2024-12-13T13:27:31.389099728Z" level=error msg="encountered an error cleaning up failed sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.389267 containerd[1470]: time="2024-12-13T13:27:31.389159648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.389387 kubelet[2644]: E1213 13:27:31.389324 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.389423 kubelet[2644]: E1213 13:27:31.389382 2644 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:31.389423 kubelet[2644]: E1213 13:27:31.389401 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:31.389467 kubelet[2644]: E1213 13:27:31.389434 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tm76v" podUID="16d3a59c-4627-445a-a548-3d17c81a3ebd" Dec 13 13:27:31.392562 containerd[1470]: time="2024-12-13T13:27:31.392333210Z" level=error msg="Failed to destroy network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.393052 containerd[1470]: time="2024-12-13T13:27:31.393000611Z" level=error msg="encountered an error cleaning up failed sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.393101 containerd[1470]: time="2024-12-13T13:27:31.393064131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.393628 kubelet[2644]: E1213 13:27:31.393256 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.393628 kubelet[2644]: E1213 13:27:31.393305 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:31.393628 kubelet[2644]: E1213 13:27:31.393323 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:31.393716 kubelet[2644]: E1213 13:27:31.393378 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" podUID="77036710-cd93-4856-bfc5-037aee276132" Dec 13 13:27:31.408079 containerd[1470]: time="2024-12-13T13:27:31.408019219Z" level=error msg="Failed to destroy network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.410132 containerd[1470]: time="2024-12-13T13:27:31.410089900Z" level=error msg="encountered an error cleaning up failed sandbox 
\"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.410192 containerd[1470]: time="2024-12-13T13:27:31.410154220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.410381 kubelet[2644]: E1213 13:27:31.410347 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:31.410907 kubelet[2644]: E1213 13:27:31.410616 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:31.410907 kubelet[2644]: E1213 13:27:31.410643 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:31.410907 kubelet[2644]: E1213 13:27:31.410690 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g2n7" podUID="7432b46c-534c-4718-843e-36f44a5a5ac1" Dec 13 13:27:31.428499 sshd[4124]: Connection closed by 10.0.0.1 port 39330 Dec 13 13:27:31.429107 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:31.433386 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:27:31.433712 systemd[1]: sshd@8-10.0.0.123:22-10.0.0.1:39330.service: Deactivated successfully. Dec 13 13:27:31.436337 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:27:31.437865 systemd-logind[1454]: Removed session 9. Dec 13 13:27:32.014170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b-shm.mount: Deactivated successfully. 
Dec 13 13:27:32.129901 kubelet[2644]: I1213 13:27:32.129793 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d" Dec 13 13:27:32.130901 containerd[1470]: time="2024-12-13T13:27:32.130802135Z" level=info msg="StopPodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\"" Dec 13 13:27:32.131879 containerd[1470]: time="2024-12-13T13:27:32.131506455Z" level=info msg="Ensure that sandbox 11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d in task-service has been cleanup successfully" Dec 13 13:27:32.131879 containerd[1470]: time="2024-12-13T13:27:32.131712576Z" level=info msg="TearDown network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" successfully" Dec 13 13:27:32.131879 containerd[1470]: time="2024-12-13T13:27:32.131727896Z" level=info msg="StopPodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" returns successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.132217416Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\"" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.132307576Z" level=info msg="TearDown network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.132318576Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" returns successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.132672256Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\"" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.132746296Z" level=info msg="TearDown network for sandbox 
\"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.132754936Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" returns successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.133686537Z" level=info msg="StopPodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\"" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.133833097Z" level=info msg="Ensure that sandbox 6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0 in task-service has been cleanup successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134001777Z" level=info msg="TearDown network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134015377Z" level=info msg="StopPodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" returns successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134221177Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\"" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134292297Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134302457Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134404737Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\"" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134480057Z" level=info msg="TearDown 
network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134492217Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" returns successfully" Dec 13 13:27:32.137907 containerd[1470]: time="2024-12-13T13:27:32.134674777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:4,}" Dec 13 13:27:32.134348 systemd[1]: run-netns-cni\x2d4a85c860\x2d8cc7\x2d4c94\x2d678f\x2d6e6d59715824.mount: Deactivated successfully. Dec 13 13:27:32.138640 kubelet[2644]: I1213 13:27:32.133181 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0" Dec 13 13:27:32.138640 kubelet[2644]: E1213 13:27:32.134429 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:32.136540 systemd[1]: run-netns-cni\x2d64278cf2\x2d6a48\x2d4f1e\x2de88a\x2dc254528fa8fb.mount: Deactivated successfully. 
Dec 13 13:27:32.333043 kubelet[2644]: I1213 13:27:32.332934 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057" Dec 13 13:27:32.335016 containerd[1470]: time="2024-12-13T13:27:32.334973081Z" level=info msg="StopPodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\"" Dec 13 13:27:32.336467 containerd[1470]: time="2024-12-13T13:27:32.336421002Z" level=info msg="Ensure that sandbox d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057 in task-service has been cleanup successfully" Dec 13 13:27:32.338519 containerd[1470]: time="2024-12-13T13:27:32.336774202Z" level=info msg="TearDown network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" successfully" Dec 13 13:27:32.338519 containerd[1470]: time="2024-12-13T13:27:32.336807162Z" level=info msg="StopPodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" returns successfully" Dec 13 13:27:32.338519 containerd[1470]: time="2024-12-13T13:27:32.337178882Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\"" Dec 13 13:27:32.338519 containerd[1470]: time="2024-12-13T13:27:32.337426763Z" level=info msg="TearDown network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" successfully" Dec 13 13:27:32.338519 containerd[1470]: time="2024-12-13T13:27:32.337446803Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" returns successfully" Dec 13 13:27:32.339030 containerd[1470]: time="2024-12-13T13:27:32.338997203Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\"" Dec 13 13:27:32.339113 containerd[1470]: time="2024-12-13T13:27:32.339086563Z" level=info msg="TearDown network for sandbox 
\"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully" Dec 13 13:27:32.339113 containerd[1470]: time="2024-12-13T13:27:32.339108283Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully" Dec 13 13:27:32.340893 containerd[1470]: time="2024-12-13T13:27:32.339464204Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\"" Dec 13 13:27:32.340893 containerd[1470]: time="2024-12-13T13:27:32.339720364Z" level=info msg="TearDown network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" successfully" Dec 13 13:27:32.340893 containerd[1470]: time="2024-12-13T13:27:32.339733964Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" returns successfully" Dec 13 13:27:32.340893 containerd[1470]: time="2024-12-13T13:27:32.340455884Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\"" Dec 13 13:27:32.340893 containerd[1470]: time="2024-12-13T13:27:32.340710324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:4,}" Dec 13 13:27:32.340893 containerd[1470]: time="2024-12-13T13:27:32.340738124Z" level=info msg="TearDown network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" successfully" Dec 13 13:27:32.340367 systemd[1]: run-netns-cni\x2d90f43390\x2d2929\x2d6acd\x2df6c4\x2d32a68f546138.mount: Deactivated successfully. 
Dec 13 13:27:32.345473 containerd[1470]: time="2024-12-13T13:27:32.340752004Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" returns successfully" Dec 13 13:27:32.347350 kubelet[2644]: I1213 13:27:32.347307 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b" Dec 13 13:27:32.348033 containerd[1470]: time="2024-12-13T13:27:32.347996168Z" level=info msg="StopPodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\"" Dec 13 13:27:32.348167 containerd[1470]: time="2024-12-13T13:27:32.348143488Z" level=info msg="Ensure that sandbox f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b in task-service has been cleanup successfully" Dec 13 13:27:32.349896 containerd[1470]: time="2024-12-13T13:27:32.348997809Z" level=info msg="TearDown network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" successfully" Dec 13 13:27:32.349896 containerd[1470]: time="2024-12-13T13:27:32.349026289Z" level=info msg="StopPodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" returns successfully" Dec 13 13:27:32.350442 systemd[1]: run-netns-cni\x2d2703091d\x2d0bed\x2d77e0\x2d5858\x2dd15cc1db2cee.mount: Deactivated successfully. 
Dec 13 13:27:32.354086 containerd[1470]: time="2024-12-13T13:27:32.354024211Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\"" Dec 13 13:27:32.354163 containerd[1470]: time="2024-12-13T13:27:32.354149891Z" level=info msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully" Dec 13 13:27:32.354186 containerd[1470]: time="2024-12-13T13:27:32.354161851Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully" Dec 13 13:27:32.354704 kubelet[2644]: E1213 13:27:32.354508 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:32.357224 containerd[1470]: time="2024-12-13T13:27:32.356599332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:4,}" Dec 13 13:27:32.357224 containerd[1470]: time="2024-12-13T13:27:32.356975693Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\"" Dec 13 13:27:32.357224 containerd[1470]: time="2024-12-13T13:27:32.357046133Z" level=info msg="TearDown network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" successfully" Dec 13 13:27:32.357224 containerd[1470]: time="2024-12-13T13:27:32.357054853Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" returns successfully" Dec 13 13:27:32.358157 containerd[1470]: time="2024-12-13T13:27:32.358126453Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\"" Dec 13 13:27:32.358222 containerd[1470]: time="2024-12-13T13:27:32.358201053Z" level=info msg="TearDown network for sandbox 
\"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" successfully" Dec 13 13:27:32.358222 containerd[1470]: time="2024-12-13T13:27:32.358210253Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" returns successfully" Dec 13 13:27:32.358725 containerd[1470]: time="2024-12-13T13:27:32.358699094Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\"" Dec 13 13:27:32.358815 containerd[1470]: time="2024-12-13T13:27:32.358782934Z" level=info msg="TearDown network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully" Dec 13 13:27:32.358815 containerd[1470]: time="2024-12-13T13:27:32.358793454Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully" Dec 13 13:27:32.359679 containerd[1470]: time="2024-12-13T13:27:32.359326694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:4,}" Dec 13 13:27:32.359710 kubelet[2644]: I1213 13:27:32.359317 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3" Dec 13 13:27:32.359932 containerd[1470]: time="2024-12-13T13:27:32.359903334Z" level=info msg="StopPodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\"" Dec 13 13:27:32.361314 containerd[1470]: time="2024-12-13T13:27:32.361233175Z" level=info msg="Ensure that sandbox 28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3 in task-service has been cleanup successfully" Dec 13 13:27:32.361916 containerd[1470]: time="2024-12-13T13:27:32.361642695Z" level=info msg="TearDown network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" successfully" Dec 13 13:27:32.361916 
containerd[1470]: time="2024-12-13T13:27:32.361664375Z" level=info msg="StopPodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" returns successfully" Dec 13 13:27:32.362565 containerd[1470]: time="2024-12-13T13:27:32.362528816Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\"" Dec 13 13:27:32.362709 containerd[1470]: time="2024-12-13T13:27:32.362692136Z" level=info msg="TearDown network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" successfully" Dec 13 13:27:32.362858 containerd[1470]: time="2024-12-13T13:27:32.362840816Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" returns successfully" Dec 13 13:27:32.363896 containerd[1470]: time="2024-12-13T13:27:32.363839376Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\"" Dec 13 13:27:32.364105 containerd[1470]: time="2024-12-13T13:27:32.364079616Z" level=info msg="TearDown network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" successfully" Dec 13 13:27:32.364185 containerd[1470]: time="2024-12-13T13:27:32.364164056Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" returns successfully" Dec 13 13:27:32.364855 containerd[1470]: time="2024-12-13T13:27:32.364825217Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\"" Dec 13 13:27:32.365182 containerd[1470]: time="2024-12-13T13:27:32.365125097Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully" Dec 13 13:27:32.365244 containerd[1470]: time="2024-12-13T13:27:32.365229257Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully" Dec 13 13:27:32.365631 
kubelet[2644]: I1213 13:27:32.365557 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd" Dec 13 13:27:32.366113 containerd[1470]: time="2024-12-13T13:27:32.365843697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:27:32.380675 containerd[1470]: time="2024-12-13T13:27:32.380634465Z" level=info msg="StopPodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\"" Dec 13 13:27:32.381238 containerd[1470]: time="2024-12-13T13:27:32.381085265Z" level=info msg="Ensure that sandbox 2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd in task-service has been cleanup successfully" Dec 13 13:27:32.382945 containerd[1470]: time="2024-12-13T13:27:32.382821226Z" level=info msg="TearDown network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" successfully" Dec 13 13:27:32.382945 containerd[1470]: time="2024-12-13T13:27:32.382851826Z" level=info msg="StopPodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" returns successfully" Dec 13 13:27:32.385276 containerd[1470]: time="2024-12-13T13:27:32.384935307Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\"" Dec 13 13:27:32.385276 containerd[1470]: time="2024-12-13T13:27:32.385023467Z" level=info msg="TearDown network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" successfully" Dec 13 13:27:32.385276 containerd[1470]: time="2024-12-13T13:27:32.385033587Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" returns successfully" Dec 13 13:27:32.385815 containerd[1470]: time="2024-12-13T13:27:32.385791468Z" level=info msg="StopPodSandbox for 
\"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\"" Dec 13 13:27:32.386058 containerd[1470]: time="2024-12-13T13:27:32.385909948Z" level=info msg="TearDown network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" successfully" Dec 13 13:27:32.386058 containerd[1470]: time="2024-12-13T13:27:32.385922308Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" returns successfully" Dec 13 13:27:32.386199 containerd[1470]: time="2024-12-13T13:27:32.386154468Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\"" Dec 13 13:27:32.386266 containerd[1470]: time="2024-12-13T13:27:32.386242708Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully" Dec 13 13:27:32.386266 containerd[1470]: time="2024-12-13T13:27:32.386258428Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully" Dec 13 13:27:32.387000 containerd[1470]: time="2024-12-13T13:27:32.386857948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:27:32.430384 containerd[1470]: time="2024-12-13T13:27:32.430011011Z" level=error msg="Failed to destroy network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.430384 containerd[1470]: time="2024-12-13T13:27:32.430360611Z" level=error msg="encountered an error cleaning up failed sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.430816 containerd[1470]: time="2024-12-13T13:27:32.430770211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.431080 kubelet[2644]: E1213 13:27:32.431039 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.431153 kubelet[2644]: E1213 13:27:32.431099 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:32.431153 kubelet[2644]: E1213 13:27:32.431119 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g2n7" Dec 13 13:27:32.431211 kubelet[2644]: E1213 13:27:32.431154 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4g2n7_kube-system(7432b46c-534c-4718-843e-36f44a5a5ac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g2n7" podUID="7432b46c-534c-4718-843e-36f44a5a5ac1" Dec 13 13:27:32.484763 containerd[1470]: time="2024-12-13T13:27:32.484712439Z" level=error msg="Failed to destroy network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.485084 containerd[1470]: time="2024-12-13T13:27:32.485055479Z" level=error msg="encountered an error cleaning up failed sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.485142 containerd[1470]: time="2024-12-13T13:27:32.485117919Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.485456 kubelet[2644]: E1213 13:27:32.485331 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.485858 kubelet[2644]: E1213 13:27:32.485540 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:32.485858 kubelet[2644]: E1213 13:27:32.485567 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" Dec 13 13:27:32.485858 kubelet[2644]: E1213 13:27:32.485613 2644 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84949d5b96-26kkm_calico-system(fdd0e41e-e772-4088-a22c-e059031fe725)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" podUID="fdd0e41e-e772-4088-a22c-e059031fe725" Dec 13 13:27:32.509037 containerd[1470]: time="2024-12-13T13:27:32.508993692Z" level=error msg="Failed to destroy network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.509934 containerd[1470]: time="2024-12-13T13:27:32.509527652Z" level=error msg="encountered an error cleaning up failed sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.509934 containerd[1470]: time="2024-12-13T13:27:32.509637412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.509934 containerd[1470]: time="2024-12-13T13:27:32.509772412Z" level=error msg="Failed to destroy network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.510110 kubelet[2644]: E1213 13:27:32.510046 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.510110 kubelet[2644]: E1213 13:27:32.510104 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 13 13:27:32.510190 kubelet[2644]: E1213 13:27:32.510130 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl58q" Dec 
13 13:27:32.510190 kubelet[2644]: E1213 13:27:32.510173 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl58q_calico-system(991aee5f-8651-49bc-ac7b-b0b4b2cc81c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl58q" podUID="991aee5f-8651-49bc-ac7b-b0b4b2cc81c5" Dec 13 13:27:32.511039 containerd[1470]: time="2024-12-13T13:27:32.510638333Z" level=error msg="encountered an error cleaning up failed sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.511039 containerd[1470]: time="2024-12-13T13:27:32.510730253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.511174 kubelet[2644]: E1213 13:27:32.510886 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.511174 kubelet[2644]: E1213 13:27:32.510920 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:32.511174 kubelet[2644]: E1213 13:27:32.510935 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" Dec 13 13:27:32.511248 kubelet[2644]: E1213 13:27:32.510974 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-4w2cp_calico-apiserver(77036710-cd93-4856-bfc5-037aee276132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" podUID="77036710-cd93-4856-bfc5-037aee276132" Dec 13 13:27:32.512803 containerd[1470]: time="2024-12-13T13:27:32.512703974Z" level=error msg="Failed to destroy network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.512987 containerd[1470]: time="2024-12-13T13:27:32.512959814Z" level=error msg="Failed to destroy network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.513468 containerd[1470]: time="2024-12-13T13:27:32.513321174Z" level=error msg="encountered an error cleaning up failed sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.513468 containerd[1470]: time="2024-12-13T13:27:32.513375254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.513576 kubelet[2644]: E1213 13:27:32.513533 2644 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.513576 kubelet[2644]: E1213 13:27:32.513565 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:32.513622 kubelet[2644]: E1213 13:27:32.513579 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" Dec 13 13:27:32.513983 kubelet[2644]: E1213 13:27:32.513615 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69648fc998-v2qqw_calico-apiserver(b1402681-1dab-4e27-bb98-220ec86ddde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" podUID="b1402681-1dab-4e27-bb98-220ec86ddde8" Dec 13 13:27:32.514066 containerd[1470]: time="2024-12-13T13:27:32.513830014Z" level=error msg="encountered an error cleaning up failed sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.514227 containerd[1470]: time="2024-12-13T13:27:32.514200934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.514431 kubelet[2644]: E1213 13:27:32.514407 2644 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:32.514522 kubelet[2644]: E1213 13:27:32.514436 2644 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:32.514522 kubelet[2644]: E1213 13:27:32.514452 2644 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tm76v" Dec 13 13:27:32.514522 kubelet[2644]: E1213 13:27:32.514475 2644 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tm76v_kube-system(16d3a59c-4627-445a-a548-3d17c81a3ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tm76v" podUID="16d3a59c-4627-445a-a548-3d17c81a3ebd" Dec 13 13:27:32.645469 containerd[1470]: time="2024-12-13T13:27:32.645357163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:32.646121 containerd[1470]: time="2024-12-13T13:27:32.646006003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 13:27:32.648547 containerd[1470]: time="2024-12-13T13:27:32.648321004Z" level=info msg="ImageCreate event 
name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:32.650900 containerd[1470]: time="2024-12-13T13:27:32.650832365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:32.651807 containerd[1470]: time="2024-12-13T13:27:32.651424886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.578472539s" Dec 13 13:27:32.651807 containerd[1470]: time="2024-12-13T13:27:32.651461926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 13:27:32.657835 containerd[1470]: time="2024-12-13T13:27:32.657803929Z" level=info msg="CreateContainer within sandbox \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:27:32.671026 containerd[1470]: time="2024-12-13T13:27:32.670984376Z" level=info msg="CreateContainer within sandbox \"997ce60831c9ab1184f945dd32241f976f01c01233bb01752d0581958e8c335c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e380bc1c1e2dc1288b8a40f0ba80bc4a237db2a8b7b736bc905259dfc56a0cd3\"" Dec 13 13:27:32.671668 containerd[1470]: time="2024-12-13T13:27:32.671603496Z" level=info msg="StartContainer for \"e380bc1c1e2dc1288b8a40f0ba80bc4a237db2a8b7b736bc905259dfc56a0cd3\"" Dec 13 13:27:32.727052 systemd[1]: Started 
cri-containerd-e380bc1c1e2dc1288b8a40f0ba80bc4a237db2a8b7b736bc905259dfc56a0cd3.scope - libcontainer container e380bc1c1e2dc1288b8a40f0ba80bc4a237db2a8b7b736bc905259dfc56a0cd3. Dec 13 13:27:32.758624 containerd[1470]: time="2024-12-13T13:27:32.758580301Z" level=info msg="StartContainer for \"e380bc1c1e2dc1288b8a40f0ba80bc4a237db2a8b7b736bc905259dfc56a0cd3\" returns successfully" Dec 13 13:27:32.948175 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:27:32.948279 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 13:27:33.016538 systemd[1]: run-netns-cni\x2d42ecfc13\x2d2fb8\x2dfa59\x2d99a4\x2d4b6f1b1dc198.mount: Deactivated successfully. Dec 13 13:27:33.017293 systemd[1]: run-netns-cni\x2decf896d2\x2d8fc4\x2dff52\x2d8525\x2dafb4af962480.mount: Deactivated successfully. Dec 13 13:27:33.017374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897581644.mount: Deactivated successfully. Dec 13 13:27:33.369705 kubelet[2644]: I1213 13:27:33.369591 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a" Dec 13 13:27:33.370205 containerd[1470]: time="2024-12-13T13:27:33.370093127Z" level=info msg="StopPodSandbox for \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\"" Dec 13 13:27:33.370653 containerd[1470]: time="2024-12-13T13:27:33.370469927Z" level=info msg="Ensure that sandbox 6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a in task-service has been cleanup successfully" Dec 13 13:27:33.372801 kubelet[2644]: I1213 13:27:33.372420 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20" Dec 13 13:27:33.372892 containerd[1470]: time="2024-12-13T13:27:33.372792729Z" level=info msg="TearDown network for sandbox 
\"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\" successfully" Dec 13 13:27:33.372892 containerd[1470]: time="2024-12-13T13:27:33.372813409Z" level=info msg="StopPodSandbox for \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\" returns successfully" Dec 13 13:27:33.372892 containerd[1470]: time="2024-12-13T13:27:33.372839849Z" level=info msg="StopPodSandbox for \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\"" Dec 13 13:27:33.373087 containerd[1470]: time="2024-12-13T13:27:33.373001049Z" level=info msg="Ensure that sandbox 38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20 in task-service has been cleanup successfully" Dec 13 13:27:33.373087 containerd[1470]: time="2024-12-13T13:27:33.373055569Z" level=info msg="StopPodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\"" Dec 13 13:27:33.373813 containerd[1470]: time="2024-12-13T13:27:33.373146849Z" level=info msg="TearDown network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" successfully" Dec 13 13:27:33.373813 containerd[1470]: time="2024-12-13T13:27:33.373156489Z" level=info msg="StopPodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" returns successfully" Dec 13 13:27:33.373813 containerd[1470]: time="2024-12-13T13:27:33.373171129Z" level=info msg="TearDown network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\" successfully" Dec 13 13:27:33.373813 containerd[1470]: time="2024-12-13T13:27:33.373186569Z" level=info msg="StopPodSandbox for \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\" returns successfully" Dec 13 13:27:33.373267 systemd[1]: run-netns-cni\x2d1b7e698b\x2dc728\x2d7d65\x2d3e7c\x2de755bba7a127.mount: Deactivated successfully. 
Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.373907089Z" level=info msg="StopPodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\"" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.373973409Z" level=info msg="TearDown network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" successfully" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.373983369Z" level=info msg="StopPodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" returns successfully" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.374415449Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\"" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.374469809Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\"" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.374544769Z" level=info msg="TearDown network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" successfully" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.374558969Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" returns successfully" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.374578969Z" level=info msg="TearDown network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" successfully" Dec 13 13:27:33.374152 containerd[1470]: time="2024-12-13T13:27:33.374591129Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" returns successfully" Dec 13 13:27:33.375044 containerd[1470]: time="2024-12-13T13:27:33.374890330Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\"" Dec 13 13:27:33.375044 
containerd[1470]: time="2024-12-13T13:27:33.374956770Z" level=info msg="TearDown network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" successfully" Dec 13 13:27:33.375044 containerd[1470]: time="2024-12-13T13:27:33.374965250Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" returns successfully" Dec 13 13:27:33.375044 containerd[1470]: time="2024-12-13T13:27:33.375015850Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\"" Dec 13 13:27:33.375125 containerd[1470]: time="2024-12-13T13:27:33.375062970Z" level=info msg="TearDown network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" successfully" Dec 13 13:27:33.375125 containerd[1470]: time="2024-12-13T13:27:33.375070610Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" returns successfully" Dec 13 13:27:33.375908 containerd[1470]: time="2024-12-13T13:27:33.375576410Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\"" Dec 13 13:27:33.375908 containerd[1470]: time="2024-12-13T13:27:33.375622130Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\"" Dec 13 13:27:33.375908 containerd[1470]: time="2024-12-13T13:27:33.375654690Z" level=info msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully" Dec 13 13:27:33.375908 containerd[1470]: time="2024-12-13T13:27:33.375664730Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully" Dec 13 13:27:33.375908 containerd[1470]: time="2024-12-13T13:27:33.375690330Z" level=info msg="TearDown network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully" Dec 13 13:27:33.375908 
containerd[1470]: time="2024-12-13T13:27:33.375698730Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully" Dec 13 13:27:33.376128 kubelet[2644]: E1213 13:27:33.376050 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:33.376599 containerd[1470]: time="2024-12-13T13:27:33.376555730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:5,}" Dec 13 13:27:33.376818 systemd[1]: run-netns-cni\x2d57a83f68\x2dbd01\x2d3269\x2d20c3\x2d6a2f5f041ee0.mount: Deactivated successfully. Dec 13 13:27:33.377289 containerd[1470]: time="2024-12-13T13:27:33.377218291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:5,}" Dec 13 13:27:33.378118 kubelet[2644]: E1213 13:27:33.378095 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:33.384304 kubelet[2644]: I1213 13:27:33.384048 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6" Dec 13 13:27:33.384604 containerd[1470]: time="2024-12-13T13:27:33.384570694Z" level=info msg="StopPodSandbox for \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\"" Dec 13 13:27:33.384836 containerd[1470]: time="2024-12-13T13:27:33.384815814Z" level=info msg="Ensure that sandbox ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6 in task-service has been cleanup successfully" Dec 13 13:27:33.386895 containerd[1470]: time="2024-12-13T13:27:33.386847535Z" 
level=info msg="TearDown network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\" successfully" Dec 13 13:27:33.386895 containerd[1470]: time="2024-12-13T13:27:33.386892135Z" level=info msg="StopPodSandbox for \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\" returns successfully" Dec 13 13:27:33.387070 systemd[1]: run-netns-cni\x2dff7a9735\x2d1fff\x2d329d\x2d7948\x2de0ad574952f9.mount: Deactivated successfully. Dec 13 13:27:33.388322 containerd[1470]: time="2024-12-13T13:27:33.388286096Z" level=info msg="StopPodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\"" Dec 13 13:27:33.388406 containerd[1470]: time="2024-12-13T13:27:33.388389376Z" level=info msg="TearDown network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" successfully" Dec 13 13:27:33.388406 containerd[1470]: time="2024-12-13T13:27:33.388403776Z" level=info msg="StopPodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" returns successfully" Dec 13 13:27:33.388932 containerd[1470]: time="2024-12-13T13:27:33.388774616Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\"" Dec 13 13:27:33.388932 containerd[1470]: time="2024-12-13T13:27:33.388853896Z" level=info msg="TearDown network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" successfully" Dec 13 13:27:33.388932 containerd[1470]: time="2024-12-13T13:27:33.388863256Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" returns successfully" Dec 13 13:27:33.391185 containerd[1470]: time="2024-12-13T13:27:33.391147618Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\"" Dec 13 13:27:33.391250 containerd[1470]: time="2024-12-13T13:27:33.391223098Z" level=info msg="TearDown network for sandbox 
\"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" successfully" Dec 13 13:27:33.391250 containerd[1470]: time="2024-12-13T13:27:33.391232378Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" returns successfully" Dec 13 13:27:33.391812 containerd[1470]: time="2024-12-13T13:27:33.391551218Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\"" Dec 13 13:27:33.391812 containerd[1470]: time="2024-12-13T13:27:33.391613178Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully" Dec 13 13:27:33.391812 containerd[1470]: time="2024-12-13T13:27:33.391622938Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully" Dec 13 13:27:33.392187 kubelet[2644]: I1213 13:27:33.392159 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2" Dec 13 13:27:33.392947 containerd[1470]: time="2024-12-13T13:27:33.392865538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:27:33.393724 containerd[1470]: time="2024-12-13T13:27:33.393565579Z" level=info msg="StopPodSandbox for \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\"" Dec 13 13:27:33.394326 containerd[1470]: time="2024-12-13T13:27:33.394142539Z" level=info msg="Ensure that sandbox dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2 in task-service has been cleanup successfully" Dec 13 13:27:33.394835 containerd[1470]: time="2024-12-13T13:27:33.394811979Z" level=info msg="TearDown network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\" successfully" Dec 13 13:27:33.394835 
containerd[1470]: time="2024-12-13T13:27:33.394833579Z" level=info msg="StopPodSandbox for \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\" returns successfully" Dec 13 13:27:33.395255 kubelet[2644]: I1213 13:27:33.395219 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275" Dec 13 13:27:33.395293 containerd[1470]: time="2024-12-13T13:27:33.395227259Z" level=info msg="StopPodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\"" Dec 13 13:27:33.395325 containerd[1470]: time="2024-12-13T13:27:33.395294540Z" level=info msg="TearDown network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" successfully" Dec 13 13:27:33.395325 containerd[1470]: time="2024-12-13T13:27:33.395303620Z" level=info msg="StopPodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" returns successfully" Dec 13 13:27:33.395642 containerd[1470]: time="2024-12-13T13:27:33.395621580Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\"" Dec 13 13:27:33.395702 containerd[1470]: time="2024-12-13T13:27:33.395690180Z" level=info msg="TearDown network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" successfully" Dec 13 13:27:33.395727 containerd[1470]: time="2024-12-13T13:27:33.395702580Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" returns successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396050260Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\"" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396112140Z" level=info msg="TearDown network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" successfully" Dec 13 
13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396121340Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" returns successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396193020Z" level=info msg="StopPodSandbox for \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\"" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396307700Z" level=info msg="Ensure that sandbox 935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275 in task-service has been cleanup successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396477780Z" level=info msg="TearDown network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\" successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396491820Z" level=info msg="StopPodSandbox for \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\" returns successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396618460Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\"" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396687020Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396696660Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.396920820Z" level=info msg="StopPodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\"" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.397023620Z" level=info msg="TearDown network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" 
successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.397033500Z" level=info msg="StopPodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" returns successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.397573061Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\"" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.397674061Z" level=info msg="TearDown network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.397685341Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" returns successfully" Dec 13 13:27:33.398149 containerd[1470]: time="2024-12-13T13:27:33.397842701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:27:33.397585 systemd[1]: run-netns-cni\x2db374e3d3\x2d4d60\x2defdc\x2d2cc3\x2df7e2734cd580.mount: Deactivated successfully. 
Dec 13 13:27:33.398581 containerd[1470]: time="2024-12-13T13:27:33.398299101Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\"" Dec 13 13:27:33.398581 containerd[1470]: time="2024-12-13T13:27:33.398388101Z" level=info msg="TearDown network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" successfully" Dec 13 13:27:33.398581 containerd[1470]: time="2024-12-13T13:27:33.398398821Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" returns successfully" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.398792181Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\"" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.399214021Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.399240221Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.399538942Z" level=info msg="StopPodSandbox for \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\"" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.399667742Z" level=info msg="Ensure that sandbox f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472 in task-service has been cleanup successfully" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.399809782Z" level=info msg="TearDown network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\" successfully" Dec 13 13:27:33.399889 containerd[1470]: time="2024-12-13T13:27:33.399821902Z" level=info msg="StopPodSandbox for \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\" 
returns successfully" Dec 13 13:27:33.400142 kubelet[2644]: I1213 13:27:33.399032 2644 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472" Dec 13 13:27:33.400142 kubelet[2644]: E1213 13:27:33.399726 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:33.400275 containerd[1470]: time="2024-12-13T13:27:33.400251702Z" level=info msg="StopPodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\"" Dec 13 13:27:33.400362 containerd[1470]: time="2024-12-13T13:27:33.400334462Z" level=info msg="TearDown network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" successfully" Dec 13 13:27:33.400362 containerd[1470]: time="2024-12-13T13:27:33.400360302Z" level=info msg="StopPodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" returns successfully" Dec 13 13:27:33.400799 containerd[1470]: time="2024-12-13T13:27:33.400643822Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\"" Dec 13 13:27:33.400799 containerd[1470]: time="2024-12-13T13:27:33.400730902Z" level=info msg="TearDown network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" successfully" Dec 13 13:27:33.400799 containerd[1470]: time="2024-12-13T13:27:33.400740742Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" returns successfully" Dec 13 13:27:33.400799 containerd[1470]: time="2024-12-13T13:27:33.400783662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:5,}" Dec 13 13:27:33.401012 containerd[1470]: time="2024-12-13T13:27:33.400980102Z" level=info 
msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\"" Dec 13 13:27:33.401081 containerd[1470]: time="2024-12-13T13:27:33.401064782Z" level=info msg="TearDown network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" successfully" Dec 13 13:27:33.401081 containerd[1470]: time="2024-12-13T13:27:33.401078822Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" returns successfully" Dec 13 13:27:33.401477 containerd[1470]: time="2024-12-13T13:27:33.401320142Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\"" Dec 13 13:27:33.401477 containerd[1470]: time="2024-12-13T13:27:33.401411423Z" level=info msg="TearDown network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully" Dec 13 13:27:33.401477 containerd[1470]: time="2024-12-13T13:27:33.401422103Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully" Dec 13 13:27:33.401757 containerd[1470]: time="2024-12-13T13:27:33.401733863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:5,}" Dec 13 13:27:33.860817 systemd-networkd[1397]: cali97d63e3e84b: Link UP Dec 13 13:27:33.861058 systemd-networkd[1397]: cali97d63e3e84b: Gained carrier Dec 13 13:27:33.870106 kubelet[2644]: I1213 13:27:33.869906 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b5ckg" podStartSLOduration=2.667143244 podStartE2EDuration="15.869830491s" podCreationTimestamp="2024-12-13 13:27:18 +0000 UTC" firstStartedPulling="2024-12-13 13:27:19.449318999 +0000 UTC m=+22.544407803" lastFinishedPulling="2024-12-13 13:27:32.652006206 +0000 UTC m=+35.747095050" observedRunningTime="2024-12-13 13:27:33.415133149 +0000 UTC 
m=+36.510221993" watchObservedRunningTime="2024-12-13 13:27:33.869830491 +0000 UTC m=+36.964919335" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.517 [INFO][4685] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.632 [INFO][4685] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jl58q-eth0 csi-node-driver- calico-system 991aee5f-8651-49bc-ac7b-b0b4b2cc81c5 616 0 2024-12-13 13:27:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jl58q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali97d63e3e84b [] []}} ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.633 [INFO][4685] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.781 [INFO][4736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" HandleID="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Workload="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.808 [INFO][4736] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" HandleID="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Workload="localhost-k8s-csi--node--driver--jl58q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000242720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jl58q", "timestamp":"2024-12-13 13:27:33.781708288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.810 [INFO][4736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.811 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.811 [INFO][4736] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.813 [INFO][4736] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.822 [INFO][4736] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.826 [INFO][4736] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.833 [INFO][4736] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.835 [INFO][4736] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.835 [INFO][4736] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.836 [INFO][4736] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68 Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.840 [INFO][4736] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.844 [INFO][4736] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.845 [INFO][4736] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" host="localhost" Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.845 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:27:33.872346 containerd[1470]: 2024-12-13 13:27:33.845 [INFO][4736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" HandleID="k8s-pod-network.5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Workload="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.872837 containerd[1470]: 2024-12-13 13:27:33.847 [INFO][4685] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jl58q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jl58q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali97d63e3e84b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.872837 containerd[1470]: 2024-12-13 13:27:33.847 [INFO][4685] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.872837 containerd[1470]: 2024-12-13 13:27:33.847 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97d63e3e84b ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.872837 containerd[1470]: 2024-12-13 13:27:33.859 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.872837 containerd[1470]: 2024-12-13 13:27:33.859 [INFO][4685] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" 
Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jl58q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"991aee5f-8651-49bc-ac7b-b0b4b2cc81c5", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68", Pod:"csi-node-driver-jl58q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali97d63e3e84b", MAC:"f6:65:8f:06:71:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.872837 containerd[1470]: 2024-12-13 13:27:33.870 [INFO][4685] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68" Namespace="calico-system" Pod="csi-node-driver-jl58q" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl58q-eth0" Dec 13 13:27:33.880999 
systemd-networkd[1397]: cali75d4c9d466d: Link UP Dec 13 13:27:33.881396 systemd-networkd[1397]: cali75d4c9d466d: Gained carrier Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.548 [INFO][4664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.649 [INFO][4664] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0 coredns-7db6d8ff4d- kube-system 7432b46c-534c-4718-843e-36f44a5a5ac1 749 0 2024-12-13 13:27:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-4g2n7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75d4c9d466d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.649 [INFO][4664] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.781 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" HandleID="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Workload="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.813 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" HandleID="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Workload="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000419d60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-4g2n7", "timestamp":"2024-12-13 13:27:33.781707888 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.813 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.845 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.845 [INFO][4744] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.848 [INFO][4744] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.852 [INFO][4744] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.856 [INFO][4744] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.858 [INFO][4744] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.862 [INFO][4744] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.862 [INFO][4744] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.863 [INFO][4744] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8 Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.869 [INFO][4744] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.875 [INFO][4744] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.875 [INFO][4744] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" host="localhost" Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.875 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:27:33.892536 containerd[1470]: 2024-12-13 13:27:33.875 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" HandleID="k8s-pod-network.5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Workload="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.893943 containerd[1470]: 2024-12-13 13:27:33.878 [INFO][4664] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7432b46c-534c-4718-843e-36f44a5a5ac1", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-4g2n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75d4c9d466d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.893943 containerd[1470]: 2024-12-13 13:27:33.878 [INFO][4664] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.893943 containerd[1470]: 2024-12-13 13:27:33.879 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75d4c9d466d ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.893943 containerd[1470]: 2024-12-13 13:27:33.881 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.893943 containerd[1470]: 2024-12-13 13:27:33.882 [INFO][4664] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7432b46c-534c-4718-843e-36f44a5a5ac1", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8", Pod:"coredns-7db6d8ff4d-4g2n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75d4c9d466d", MAC:"66:3d:08:36:0c:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.893943 containerd[1470]: 2024-12-13 13:27:33.889 [INFO][4664] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-4g2n7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--4g2n7-eth0" Dec 13 13:27:33.906932 containerd[1470]: time="2024-12-13T13:27:33.906763269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:33.906932 containerd[1470]: time="2024-12-13T13:27:33.906864989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:33.906932 containerd[1470]: time="2024-12-13T13:27:33.906888629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:33.907201 containerd[1470]: time="2024-12-13T13:27:33.907024389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:33.914513 containerd[1470]: time="2024-12-13T13:27:33.914406632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:33.914513 containerd[1470]: time="2024-12-13T13:27:33.914472352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:33.917635 containerd[1470]: time="2024-12-13T13:27:33.914522713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:33.917688 containerd[1470]: time="2024-12-13T13:27:33.917642834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:33.921445 systemd-networkd[1397]: cali739b8daf778: Link UP Dec 13 13:27:33.921660 systemd-networkd[1397]: cali739b8daf778: Gained carrier Dec 13 13:27:33.932125 systemd[1]: Started cri-containerd-5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68.scope - libcontainer container 5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68. Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.521 [INFO][4656] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.643 [INFO][4656] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0 calico-apiserver-69648fc998- calico-apiserver 77036710-cd93-4856-bfc5-037aee276132 751 0 2024-12-13 13:27:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69648fc998 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69648fc998-4w2cp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali739b8daf778 [] []}} ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.643 [INFO][4656] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.788 
[INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" HandleID="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Workload="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.813 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" HandleID="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Workload="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033a1c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69648fc998-4w2cp", "timestamp":"2024-12-13 13:27:33.788314451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.813 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.875 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.875 [INFO][4743] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.877 [INFO][4743] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.884 [INFO][4743] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.894 [INFO][4743] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.896 [INFO][4743] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.900 [INFO][4743] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.901 [INFO][4743] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.904 [INFO][4743] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.909 [INFO][4743] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.914 [INFO][4743] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.914 [INFO][4743] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" host="localhost" Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.914 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:27:33.937442 containerd[1470]: 2024-12-13 13:27:33.915 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" HandleID="k8s-pod-network.bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Workload="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.938110 containerd[1470]: 2024-12-13 13:27:33.918 [INFO][4656] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0", GenerateName:"calico-apiserver-69648fc998-", Namespace:"calico-apiserver", SelfLink:"", UID:"77036710-cd93-4856-bfc5-037aee276132", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69648fc998", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69648fc998-4w2cp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali739b8daf778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.938110 containerd[1470]: 2024-12-13 13:27:33.918 [INFO][4656] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.938110 containerd[1470]: 2024-12-13 13:27:33.918 [INFO][4656] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali739b8daf778 ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.938110 containerd[1470]: 2024-12-13 13:27:33.921 [INFO][4656] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.938110 containerd[1470]: 2024-12-13 13:27:33.922 [INFO][4656] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0", GenerateName:"calico-apiserver-69648fc998-", Namespace:"calico-apiserver", SelfLink:"", UID:"77036710-cd93-4856-bfc5-037aee276132", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69648fc998", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c", Pod:"calico-apiserver-69648fc998-4w2cp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali739b8daf778", MAC:"7e:25:81:45:32:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.938110 containerd[1470]: 2024-12-13 13:27:33.934 [INFO][4656] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-4w2cp" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--4w2cp-eth0" Dec 13 13:27:33.949056 systemd[1]: Started cri-containerd-5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8.scope - libcontainer container 5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8. Dec 13 13:27:33.958382 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:33.960176 containerd[1470]: time="2024-12-13T13:27:33.959041814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:33.960176 containerd[1470]: time="2024-12-13T13:27:33.959127894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:33.960176 containerd[1470]: time="2024-12-13T13:27:33.959140774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:33.960176 containerd[1470]: time="2024-12-13T13:27:33.959220734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:33.965706 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:33.969915 systemd-networkd[1397]: cali04d9119177a: Link UP Dec 13 13:27:33.970740 systemd-networkd[1397]: cali04d9119177a: Gained carrier Dec 13 13:27:33.989145 systemd[1]: Started cri-containerd-bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c.scope - libcontainer container bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c. 
Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.508 [INFO][4651] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.646 [INFO][4651] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0 coredns-7db6d8ff4d- kube-system 16d3a59c-4627-445a-a548-3d17c81a3ebd 748 0 2024-12-13 13:27:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-tm76v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04d9119177a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.646 [INFO][4651] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.781 [INFO][4742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" HandleID="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Workload="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.814 [INFO][4742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" 
HandleID="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Workload="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003633e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-tm76v", "timestamp":"2024-12-13 13:27:33.781703728 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.814 [INFO][4742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.915 [INFO][4742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.916 [INFO][4742] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.918 [INFO][4742] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.928 [INFO][4742] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.939 [INFO][4742] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.942 [INFO][4742] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.944 [INFO][4742] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.944 
[INFO][4742] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.946 [INFO][4742] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202 Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.951 [INFO][4742] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.961 [INFO][4742] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.961 [INFO][4742] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" host="localhost" Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.961 [INFO][4742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:27:33.991857 containerd[1470]: 2024-12-13 13:27:33.961 [INFO][4742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" HandleID="k8s-pod-network.0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Workload="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.992940 containerd[1470]: 2024-12-13 13:27:33.965 [INFO][4651] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"16d3a59c-4627-445a-a548-3d17c81a3ebd", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-tm76v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04d9119177a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.992940 containerd[1470]: 2024-12-13 13:27:33.965 [INFO][4651] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.992940 containerd[1470]: 2024-12-13 13:27:33.965 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04d9119177a ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.992940 containerd[1470]: 2024-12-13 13:27:33.971 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.992940 containerd[1470]: 2024-12-13 13:27:33.972 [INFO][4651] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"16d3a59c-4627-445a-a548-3d17c81a3ebd", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202", Pod:"coredns-7db6d8ff4d-tm76v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04d9119177a", MAC:"1e:c6:ef:3f:ee:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:33.992940 containerd[1470]: 2024-12-13 13:27:33.986 [INFO][4651] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-tm76v" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tm76v-eth0" Dec 13 13:27:33.996968 containerd[1470]: time="2024-12-13T13:27:33.996832753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl58q,Uid:991aee5f-8651-49bc-ac7b-b0b4b2cc81c5,Namespace:calico-system,Attempt:5,} returns sandbox id \"5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68\"" Dec 13 13:27:34.000937 containerd[1470]: time="2024-12-13T13:27:34.000591274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:27:34.014072 containerd[1470]: time="2024-12-13T13:27:34.014038281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g2n7,Uid:7432b46c-534c-4718-843e-36f44a5a5ac1,Namespace:kube-system,Attempt:5,} returns sandbox id \"5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8\"" Dec 13 13:27:34.014921 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:34.021948 kubelet[2644]: E1213 13:27:34.021912 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:34.022412 containerd[1470]: time="2024-12-13T13:27:34.022183444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:34.022412 containerd[1470]: time="2024-12-13T13:27:34.022245644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:34.022412 containerd[1470]: time="2024-12-13T13:27:34.022256484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:34.022412 containerd[1470]: time="2024-12-13T13:27:34.022341364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:34.028394 systemd[1]: run-netns-cni\x2da15f1231\x2db45f\x2d31e0\x2dd607\x2dc9859e03408f.mount: Deactivated successfully. Dec 13 13:27:34.028489 systemd[1]: run-netns-cni\x2db3045ccf\x2dba32\x2d2e6e\x2dcedc\x2deecde03d23ab.mount: Deactivated successfully. Dec 13 13:27:34.030261 containerd[1470]: time="2024-12-13T13:27:34.030227248Z" level=info msg="CreateContainer within sandbox \"5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:27:34.043086 systemd-networkd[1397]: calidb469a33747: Link UP Dec 13 13:27:34.047350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448489401.mount: Deactivated successfully. Dec 13 13:27:34.048123 systemd-networkd[1397]: calidb469a33747: Gained carrier Dec 13 13:27:34.054588 containerd[1470]: time="2024-12-13T13:27:34.054550259Z" level=info msg="CreateContainer within sandbox \"5170f017d6e0f82924ca0991742bd4e8ff115b9ebec6388b4190d19acb178ab8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a43bcb8249bd9737b6adac4a60895fc21ad9a1489da77c1738baf0bf8c484d2\"" Dec 13 13:27:34.055695 containerd[1470]: time="2024-12-13T13:27:34.055664060Z" level=info msg="StartContainer for \"7a43bcb8249bd9737b6adac4a60895fc21ad9a1489da77c1738baf0bf8c484d2\"" Dec 13 13:27:34.073048 systemd[1]: Started cri-containerd-0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202.scope - libcontainer container 0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202. 
Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.515 [INFO][4639] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.650 [INFO][4639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0 calico-apiserver-69648fc998- calico-apiserver b1402681-1dab-4e27-bb98-220ec86ddde8 753 0 2024-12-13 13:27:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69648fc998 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69648fc998-v2qqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb469a33747 [] []}} ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.650 [INFO][4639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.781 [INFO][4762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" HandleID="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Workload="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.816 [INFO][4762] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" HandleID="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Workload="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b6120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69648fc998-v2qqw", "timestamp":"2024-12-13 13:27:33.781704488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.816 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.961 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.961 [INFO][4762] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.966 [INFO][4762] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.983 [INFO][4762] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.996 [INFO][4762] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:33.999 [INFO][4762] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.002 [INFO][4762] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.002 [INFO][4762] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.004 [INFO][4762] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405 Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.014 [INFO][4762] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.024 [INFO][4762] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.024 [INFO][4762] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" host="localhost" Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.024 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:27:34.073202 containerd[1470]: 2024-12-13 13:27:34.024 [INFO][4762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" HandleID="k8s-pod-network.fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Workload="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.073624 containerd[1470]: 2024-12-13 13:27:34.038 [INFO][4639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0", GenerateName:"calico-apiserver-69648fc998-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1402681-1dab-4e27-bb98-220ec86ddde8", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69648fc998", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69648fc998-v2qqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb469a33747", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:34.073624 containerd[1470]: 2024-12-13 13:27:34.038 [INFO][4639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.073624 containerd[1470]: 2024-12-13 13:27:34.038 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb469a33747 ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.073624 containerd[1470]: 2024-12-13 13:27:34.053 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.073624 containerd[1470]: 2024-12-13 13:27:34.056 [INFO][4639] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0", GenerateName:"calico-apiserver-69648fc998-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1402681-1dab-4e27-bb98-220ec86ddde8", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69648fc998", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405", Pod:"calico-apiserver-69648fc998-v2qqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb469a33747", MAC:"b2:65:bd:35:92:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:34.073624 containerd[1470]: 2024-12-13 13:27:34.069 [INFO][4639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405" Namespace="calico-apiserver" Pod="calico-apiserver-69648fc998-v2qqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69648fc998--v2qqw-eth0" Dec 13 13:27:34.092384 systemd[1]: Started cri-containerd-7a43bcb8249bd9737b6adac4a60895fc21ad9a1489da77c1738baf0bf8c484d2.scope - libcontainer container 7a43bcb8249bd9737b6adac4a60895fc21ad9a1489da77c1738baf0bf8c484d2. Dec 13 13:27:34.096396 containerd[1470]: time="2024-12-13T13:27:34.096344518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-4w2cp,Uid:77036710-cd93-4856-bfc5-037aee276132,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c\"" Dec 13 13:27:34.096640 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:34.100391 systemd-networkd[1397]: cali93877d5c7e1: Link UP Dec 13 13:27:34.100585 systemd-networkd[1397]: cali93877d5c7e1: Gained carrier Dec 13 13:27:34.115019 containerd[1470]: time="2024-12-13T13:27:34.114863847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:34.115137 containerd[1470]: time="2024-12-13T13:27:34.114948167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:34.115137 containerd[1470]: time="2024-12-13T13:27:34.114960167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:34.115137 containerd[1470]: time="2024-12-13T13:27:34.115042887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:33.504 [INFO][4627] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:33.652 [INFO][4627] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0 calico-kube-controllers-84949d5b96- calico-system fdd0e41e-e772-4088-a22c-e059031fe725 750 0 2024-12-13 13:27:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84949d5b96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84949d5b96-26kkm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali93877d5c7e1 [] []}} ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:33.652 [INFO][4627] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:33.815 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" HandleID="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" 
Workload="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:33.833 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" HandleID="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Workload="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034e400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84949d5b96-26kkm", "timestamp":"2024-12-13 13:27:33.815241904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:33.833 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.024 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.024 [INFO][4741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.034 [INFO][4741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.043 [INFO][4741] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.064 [INFO][4741] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.068 [INFO][4741] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.074 [INFO][4741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.074 [INFO][4741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.077 [INFO][4741] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.083 [INFO][4741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.093 [INFO][4741] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.093 [INFO][4741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" host="localhost" Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.093 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:27:34.115481 containerd[1470]: 2024-12-13 13:27:34.093 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" HandleID="k8s-pod-network.f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Workload="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.115951 containerd[1470]: 2024-12-13 13:27:34.097 [INFO][4627] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0", GenerateName:"calico-kube-controllers-84949d5b96-", Namespace:"calico-system", SelfLink:"", UID:"fdd0e41e-e772-4088-a22c-e059031fe725", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84949d5b96", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84949d5b96-26kkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93877d5c7e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:34.115951 containerd[1470]: 2024-12-13 13:27:34.097 [INFO][4627] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.115951 containerd[1470]: 2024-12-13 13:27:34.097 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93877d5c7e1 ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.115951 containerd[1470]: 2024-12-13 13:27:34.099 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.115951 containerd[1470]: 2024-12-13 13:27:34.099 [INFO][4627] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0", GenerateName:"calico-kube-controllers-84949d5b96-", Namespace:"calico-system", SelfLink:"", UID:"fdd0e41e-e772-4088-a22c-e059031fe725", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84949d5b96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b", Pod:"calico-kube-controllers-84949d5b96-26kkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93877d5c7e1", MAC:"e2:72:34:f1:0d:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:34.115951 containerd[1470]: 2024-12-13 13:27:34.113 [INFO][4627] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b" Namespace="calico-system" Pod="calico-kube-controllers-84949d5b96-26kkm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84949d5b96--26kkm-eth0" Dec 13 13:27:34.140246 systemd[1]: Started cri-containerd-fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405.scope - libcontainer container fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405. Dec 13 13:27:34.141464 containerd[1470]: time="2024-12-13T13:27:34.141431259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tm76v,Uid:16d3a59c-4627-445a-a548-3d17c81a3ebd,Namespace:kube-system,Attempt:5,} returns sandbox id \"0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202\"" Dec 13 13:27:34.142764 kubelet[2644]: E1213 13:27:34.142680 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:34.146562 containerd[1470]: time="2024-12-13T13:27:34.146444141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:34.148269 containerd[1470]: time="2024-12-13T13:27:34.148211742Z" level=info msg="CreateContainer within sandbox \"0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:27:34.156822 containerd[1470]: time="2024-12-13T13:27:34.156776826Z" level=info msg="StartContainer for \"7a43bcb8249bd9737b6adac4a60895fc21ad9a1489da77c1738baf0bf8c484d2\" returns successfully" Dec 13 13:27:34.156949 containerd[1470]: time="2024-12-13T13:27:34.146503741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:34.156949 containerd[1470]: time="2024-12-13T13:27:34.156745266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:34.158113 containerd[1470]: time="2024-12-13T13:27:34.158042546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:34.164348 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:34.170556 containerd[1470]: time="2024-12-13T13:27:34.170478552Z" level=info msg="CreateContainer within sandbox \"0b14d14d2926a5363fa66c9d67188223640c9fd9834a3c3aab45535799b4d202\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac4081e58efa5a19cf8d76ef752921f1b2c704ee5cd4eef4ebae2003807273b5\"" Dec 13 13:27:34.172854 containerd[1470]: time="2024-12-13T13:27:34.172826953Z" level=info msg="StartContainer for \"ac4081e58efa5a19cf8d76ef752921f1b2c704ee5cd4eef4ebae2003807273b5\"" Dec 13 13:27:34.178039 systemd[1]: Started cri-containerd-f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b.scope - libcontainer container f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b. Dec 13 13:27:34.197264 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:34.210195 systemd[1]: Started cri-containerd-ac4081e58efa5a19cf8d76ef752921f1b2c704ee5cd4eef4ebae2003807273b5.scope - libcontainer container ac4081e58efa5a19cf8d76ef752921f1b2c704ee5cd4eef4ebae2003807273b5. 
Dec 13 13:27:34.221132 containerd[1470]: time="2024-12-13T13:27:34.221007935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69648fc998-v2qqw,Uid:b1402681-1dab-4e27-bb98-220ec86ddde8,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405\"" Dec 13 13:27:34.240758 containerd[1470]: time="2024-12-13T13:27:34.240699904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84949d5b96-26kkm,Uid:fdd0e41e-e772-4088-a22c-e059031fe725,Namespace:calico-system,Attempt:5,} returns sandbox id \"f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b\"" Dec 13 13:27:34.246304 containerd[1470]: time="2024-12-13T13:27:34.246257707Z" level=info msg="StartContainer for \"ac4081e58efa5a19cf8d76ef752921f1b2c704ee5cd4eef4ebae2003807273b5\" returns successfully" Dec 13 13:27:34.437199 kubelet[2644]: E1213 13:27:34.425500 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:34.451139 kubelet[2644]: E1213 13:27:34.450750 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:34.451832 kubelet[2644]: E1213 13:27:34.451313 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:34.467463 kubelet[2644]: I1213 13:27:34.467394 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tm76v" podStartSLOduration=22.467376048 podStartE2EDuration="22.467376048s" podCreationTimestamp="2024-12-13 13:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2024-12-13 13:27:34.455737682 +0000 UTC m=+37.550826566" watchObservedRunningTime="2024-12-13 13:27:34.467376048 +0000 UTC m=+37.562464892" Dec 13 13:27:34.490510 kubelet[2644]: I1213 13:27:34.490433 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4g2n7" podStartSLOduration=22.490413138 podStartE2EDuration="22.490413138s" podCreationTimestamp="2024-12-13 13:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:27:34.487767897 +0000 UTC m=+37.582856741" watchObservedRunningTime="2024-12-13 13:27:34.490413138 +0000 UTC m=+37.585501982" Dec 13 13:27:34.611916 kernel: bpftool[5339]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:27:34.768379 systemd-networkd[1397]: vxlan.calico: Link UP Dec 13 13:27:34.768626 systemd-networkd[1397]: vxlan.calico: Gained carrier Dec 13 13:27:35.049688 systemd-networkd[1397]: cali04d9119177a: Gained IPv6LL Dec 13 13:27:35.050079 systemd-networkd[1397]: cali97d63e3e84b: Gained IPv6LL Dec 13 13:27:35.112996 systemd-networkd[1397]: cali739b8daf778: Gained IPv6LL Dec 13 13:27:35.304999 systemd-networkd[1397]: calidb469a33747: Gained IPv6LL Dec 13 13:27:35.309970 containerd[1470]: time="2024-12-13T13:27:35.309900824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 13:27:35.314394 containerd[1470]: time="2024-12-13T13:27:35.314253506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.313597072s" Dec 13 13:27:35.314394 containerd[1470]: time="2024-12-13T13:27:35.314293986Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 13:27:35.317320 containerd[1470]: time="2024-12-13T13:27:35.316478267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:27:35.319386 containerd[1470]: time="2024-12-13T13:27:35.319356748Z" level=info msg="CreateContainer within sandbox \"5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:27:35.333151 containerd[1470]: time="2024-12-13T13:27:35.333112794Z" level=info msg="CreateContainer within sandbox \"5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"541f42a720100ed5f28729673a1046fcc3bfcb8672134ae31035fd80ca1065d4\"" Dec 13 13:27:35.335578 containerd[1470]: time="2024-12-13T13:27:35.334423994Z" level=info msg="StartContainer for \"541f42a720100ed5f28729673a1046fcc3bfcb8672134ae31035fd80ca1065d4\"" Dec 13 13:27:35.338056 containerd[1470]: time="2024-12-13T13:27:35.338024116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:35.338898 containerd[1470]: time="2024-12-13T13:27:35.338831036Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:35.339851 containerd[1470]: time="2024-12-13T13:27:35.339511076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:35.376036 systemd[1]: Started cri-containerd-541f42a720100ed5f28729673a1046fcc3bfcb8672134ae31035fd80ca1065d4.scope - libcontainer 
container 541f42a720100ed5f28729673a1046fcc3bfcb8672134ae31035fd80ca1065d4. Dec 13 13:27:35.414263 containerd[1470]: time="2024-12-13T13:27:35.414210628Z" level=info msg="StartContainer for \"541f42a720100ed5f28729673a1046fcc3bfcb8672134ae31035fd80ca1065d4\" returns successfully" Dec 13 13:27:35.455851 kubelet[2644]: E1213 13:27:35.455541 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:35.456470 kubelet[2644]: E1213 13:27:35.456385 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:35.688976 systemd-networkd[1397]: cali75d4c9d466d: Gained IPv6LL Dec 13 13:27:36.009032 systemd-networkd[1397]: cali93877d5c7e1: Gained IPv6LL Dec 13 13:27:36.136993 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Dec 13 13:27:36.439820 systemd[1]: Started sshd@9-10.0.0.123:22-10.0.0.1:52764.service - OpenSSH per-connection server daemon (10.0.0.1:52764). Dec 13 13:27:36.459213 kubelet[2644]: E1213 13:27:36.458809 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:36.459213 kubelet[2644]: E1213 13:27:36.459182 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:36.541908 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 52764 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:36.543665 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:36.548752 systemd-logind[1454]: New session 10 of user core. 
Dec 13 13:27:36.554041 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:27:36.757968 sshd[5466]: Connection closed by 10.0.0.1 port 52764 Dec 13 13:27:36.759913 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:36.768194 systemd[1]: sshd@9-10.0.0.123:22-10.0.0.1:52764.service: Deactivated successfully. Dec 13 13:27:36.772978 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:27:36.775677 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:27:36.783152 systemd[1]: Started sshd@10-10.0.0.123:22-10.0.0.1:52778.service - OpenSSH per-connection server daemon (10.0.0.1:52778). Dec 13 13:27:36.784707 systemd-logind[1454]: Removed session 10. Dec 13 13:27:36.829798 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 52778 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:36.831253 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:36.837619 systemd-logind[1454]: New session 11 of user core. Dec 13 13:27:36.842058 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 13:27:37.075688 containerd[1470]: time="2024-12-13T13:27:37.073990789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:37.075688 containerd[1470]: time="2024-12-13T13:27:37.075623989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 13:27:37.077144 containerd[1470]: time="2024-12-13T13:27:37.077090030Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:37.081920 containerd[1470]: time="2024-12-13T13:27:37.081882232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:37.082573 containerd[1470]: time="2024-12-13T13:27:37.082540752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.766026765s" Dec 13 13:27:37.082613 containerd[1470]: time="2024-12-13T13:27:37.082574192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 13:27:37.085418 containerd[1470]: time="2024-12-13T13:27:37.084127592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:27:37.085655 containerd[1470]: time="2024-12-13T13:27:37.085602473Z" level=info msg="CreateContainer within sandbox 
\"bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:27:37.096896 containerd[1470]: time="2024-12-13T13:27:37.096820717Z" level=info msg="CreateContainer within sandbox \"bf93d8108fa4d803e30e8903963cfd33177785a98ba9fb8c35e5e48bf5b04e3c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e133c35b334f143cce605d562e562e8f46ce48175cb9d0adcf90d0af03a60e7e\"" Dec 13 13:27:37.097954 containerd[1470]: time="2024-12-13T13:27:37.097922718Z" level=info msg="StartContainer for \"e133c35b334f143cce605d562e562e8f46ce48175cb9d0adcf90d0af03a60e7e\"" Dec 13 13:27:37.155041 systemd[1]: Started cri-containerd-e133c35b334f143cce605d562e562e8f46ce48175cb9d0adcf90d0af03a60e7e.scope - libcontainer container e133c35b334f143cce605d562e562e8f46ce48175cb9d0adcf90d0af03a60e7e. Dec 13 13:27:37.202672 containerd[1470]: time="2024-12-13T13:27:37.202632797Z" level=info msg="StartContainer for \"e133c35b334f143cce605d562e562e8f46ce48175cb9d0adcf90d0af03a60e7e\" returns successfully" Dec 13 13:27:37.241087 sshd[5483]: Connection closed by 10.0.0.1 port 52778 Dec 13 13:27:37.241660 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:37.251734 systemd[1]: sshd@10-10.0.0.123:22-10.0.0.1:52778.service: Deactivated successfully. Dec 13 13:27:37.256011 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:27:37.262133 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:27:37.277283 systemd[1]: Started sshd@11-10.0.0.123:22-10.0.0.1:52786.service - OpenSSH per-connection server daemon (10.0.0.1:52786). Dec 13 13:27:37.280094 systemd-logind[1454]: Removed session 11. 
Dec 13 13:27:37.335220 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 52786 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:37.336774 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:37.340934 systemd-logind[1454]: New session 12 of user core. Dec 13 13:27:37.350031 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:27:37.479937 kubelet[2644]: I1213 13:27:37.479864 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69648fc998-4w2cp" podStartSLOduration=16.503420272 podStartE2EDuration="19.479843981s" podCreationTimestamp="2024-12-13 13:27:18 +0000 UTC" firstStartedPulling="2024-12-13 13:27:34.107472563 +0000 UTC m=+37.202561407" lastFinishedPulling="2024-12-13 13:27:37.083896272 +0000 UTC m=+40.178985116" observedRunningTime="2024-12-13 13:27:37.477910221 +0000 UTC m=+40.572999065" watchObservedRunningTime="2024-12-13 13:27:37.479843981 +0000 UTC m=+40.574932825" Dec 13 13:27:37.537339 containerd[1470]: time="2024-12-13T13:27:37.537237643Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:37.537709 containerd[1470]: time="2024-12-13T13:27:37.537653763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 13:27:37.539789 containerd[1470]: time="2024-12-13T13:27:37.539746084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 455.583452ms" Dec 13 13:27:37.539789 containerd[1470]: 
time="2024-12-13T13:27:37.539787404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 13:27:37.542268 containerd[1470]: time="2024-12-13T13:27:37.541291485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 13:27:37.542502 containerd[1470]: time="2024-12-13T13:27:37.542458645Z" level=info msg="CreateContainer within sandbox \"fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:27:37.559620 containerd[1470]: time="2024-12-13T13:27:37.556409090Z" level=info msg="CreateContainer within sandbox \"fc70dae031f6e94119ef8af0fffe70bb2a92407bf194806f75532efdf7017405\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9611ae78c5eb8c9397da2f5017ed64894d16350316b799952dccb3b6066f2a70\"" Dec 13 13:27:37.560321 containerd[1470]: time="2024-12-13T13:27:37.559762492Z" level=info msg="StartContainer for \"9611ae78c5eb8c9397da2f5017ed64894d16350316b799952dccb3b6066f2a70\"" Dec 13 13:27:37.563314 sshd[5542]: Connection closed by 10.0.0.1 port 52786 Dec 13 13:27:37.563865 sshd-session[5538]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:37.568079 systemd[1]: sshd@11-10.0.0.123:22-10.0.0.1:52786.service: Deactivated successfully. Dec 13 13:27:37.570590 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:27:37.572156 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:27:37.573182 systemd-logind[1454]: Removed session 12. Dec 13 13:27:37.594074 systemd[1]: Started cri-containerd-9611ae78c5eb8c9397da2f5017ed64894d16350316b799952dccb3b6066f2a70.scope - libcontainer container 9611ae78c5eb8c9397da2f5017ed64894d16350316b799952dccb3b6066f2a70. 
Dec 13 13:27:37.628095 containerd[1470]: time="2024-12-13T13:27:37.628047837Z" level=info msg="StartContainer for \"9611ae78c5eb8c9397da2f5017ed64894d16350316b799952dccb3b6066f2a70\" returns successfully" Dec 13 13:27:38.472999 kubelet[2644]: I1213 13:27:38.472967 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:27:39.475409 kubelet[2644]: I1213 13:27:39.475338 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:27:39.627952 containerd[1470]: time="2024-12-13T13:27:39.627707338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 13:27:39.627952 containerd[1470]: time="2024-12-13T13:27:39.627832658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:39.630554 containerd[1470]: time="2024-12-13T13:27:39.630518019Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:39.631331 containerd[1470]: time="2024-12-13T13:27:39.631289299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:39.632605 containerd[1470]: time="2024-12-13T13:27:39.632444659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.091120414s" Dec 13 13:27:39.632605 containerd[1470]: 
time="2024-12-13T13:27:39.632482579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 13:27:39.634038 containerd[1470]: time="2024-12-13T13:27:39.634014740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:27:39.643439 containerd[1470]: time="2024-12-13T13:27:39.643370103Z" level=info msg="CreateContainer within sandbox \"f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 13:27:39.654948 containerd[1470]: time="2024-12-13T13:27:39.654823227Z" level=info msg="CreateContainer within sandbox \"f97d5d0232ca53ffc71778bb263419d87763c8c9951dc9119a52441a824d933b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"94c8053baab573b98f636c818ac9ba6b28a5594fb493849d51818ac8b2d194fa\"" Dec 13 13:27:39.657022 containerd[1470]: time="2024-12-13T13:27:39.656990667Z" level=info msg="StartContainer for \"94c8053baab573b98f636c818ac9ba6b28a5594fb493849d51818ac8b2d194fa\"" Dec 13 13:27:39.689020 systemd[1]: Started cri-containerd-94c8053baab573b98f636c818ac9ba6b28a5594fb493849d51818ac8b2d194fa.scope - libcontainer container 94c8053baab573b98f636c818ac9ba6b28a5594fb493849d51818ac8b2d194fa. 
Dec 13 13:27:39.718122 containerd[1470]: time="2024-12-13T13:27:39.718051008Z" level=info msg="StartContainer for \"94c8053baab573b98f636c818ac9ba6b28a5594fb493849d51818ac8b2d194fa\" returns successfully" Dec 13 13:27:40.502920 kubelet[2644]: I1213 13:27:40.502509 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69648fc998-v2qqw" podStartSLOduration=19.189310631 podStartE2EDuration="22.502491537s" podCreationTimestamp="2024-12-13 13:27:18 +0000 UTC" firstStartedPulling="2024-12-13 13:27:34.227288338 +0000 UTC m=+37.322377182" lastFinishedPulling="2024-12-13 13:27:37.540469244 +0000 UTC m=+40.635558088" observedRunningTime="2024-12-13 13:27:38.483951268 +0000 UTC m=+41.579040112" watchObservedRunningTime="2024-12-13 13:27:40.502491537 +0000 UTC m=+43.597580381" Dec 13 13:27:40.502920 kubelet[2644]: I1213 13:27:40.502845 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84949d5b96-26kkm" podStartSLOduration=17.112771542 podStartE2EDuration="22.502841297s" podCreationTimestamp="2024-12-13 13:27:18 +0000 UTC" firstStartedPulling="2024-12-13 13:27:34.243244905 +0000 UTC m=+37.338333749" lastFinishedPulling="2024-12-13 13:27:39.63331466 +0000 UTC m=+42.728403504" observedRunningTime="2024-12-13 13:27:40.502165857 +0000 UTC m=+43.597254701" watchObservedRunningTime="2024-12-13 13:27:40.502841297 +0000 UTC m=+43.597930141" Dec 13 13:27:41.077415 containerd[1470]: time="2024-12-13T13:27:41.077353714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:41.078730 containerd[1470]: time="2024-12-13T13:27:41.078535194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 13:27:41.080854 containerd[1470]: time="2024-12-13T13:27:41.079557274Z" level=info 
msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:41.082826 containerd[1470]: time="2024-12-13T13:27:41.082791555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:41.083739 containerd[1470]: time="2024-12-13T13:27:41.083691555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.449645695s" Dec 13 13:27:41.083739 containerd[1470]: time="2024-12-13T13:27:41.083728635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 13:27:41.094069 containerd[1470]: time="2024-12-13T13:27:41.094015198Z" level=info msg="CreateContainer within sandbox \"5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:27:41.110102 containerd[1470]: time="2024-12-13T13:27:41.110053323Z" level=info msg="CreateContainer within sandbox \"5a0d0d17bf9fbb0b46005836a71e6062f3b9828950ce74916a74ff6fbc415e68\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"40ff2196e8ec5bcc85c51d1380b447c8c8f594b93115bccef3980f4a85d6e7d6\"" Dec 13 13:27:41.113678 containerd[1470]: time="2024-12-13T13:27:41.112180204Z" level=info msg="StartContainer for 
\"40ff2196e8ec5bcc85c51d1380b447c8c8f594b93115bccef3980f4a85d6e7d6\"" Dec 13 13:27:41.156058 systemd[1]: Started cri-containerd-40ff2196e8ec5bcc85c51d1380b447c8c8f594b93115bccef3980f4a85d6e7d6.scope - libcontainer container 40ff2196e8ec5bcc85c51d1380b447c8c8f594b93115bccef3980f4a85d6e7d6. Dec 13 13:27:41.194572 containerd[1470]: time="2024-12-13T13:27:41.194524988Z" level=info msg="StartContainer for \"40ff2196e8ec5bcc85c51d1380b447c8c8f594b93115bccef3980f4a85d6e7d6\" returns successfully" Dec 13 13:27:41.505778 kubelet[2644]: I1213 13:27:41.505344 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jl58q" podStartSLOduration=16.414124434 podStartE2EDuration="23.505327278s" podCreationTimestamp="2024-12-13 13:27:18 +0000 UTC" firstStartedPulling="2024-12-13 13:27:33.999928314 +0000 UTC m=+37.095017158" lastFinishedPulling="2024-12-13 13:27:41.091131158 +0000 UTC m=+44.186220002" observedRunningTime="2024-12-13 13:27:41.503339317 +0000 UTC m=+44.598428161" watchObservedRunningTime="2024-12-13 13:27:41.505327278 +0000 UTC m=+44.600416122" Dec 13 13:27:42.079902 kubelet[2644]: I1213 13:27:42.079628 2644 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:27:42.081530 kubelet[2644]: I1213 13:27:42.081505 2644 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:27:42.574746 systemd[1]: Started sshd@12-10.0.0.123:22-10.0.0.1:35470.service - OpenSSH per-connection server daemon (10.0.0.1:35470). 
Dec 13 13:27:42.662730 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 35470 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:42.665009 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:42.668959 systemd-logind[1454]: New session 13 of user core. Dec 13 13:27:42.676024 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:27:42.876355 sshd[5710]: Connection closed by 10.0.0.1 port 35470 Dec 13 13:27:42.876646 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:42.885468 systemd[1]: sshd@12-10.0.0.123:22-10.0.0.1:35470.service: Deactivated successfully. Dec 13 13:27:42.888048 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:27:42.889247 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:27:42.891223 systemd[1]: Started sshd@13-10.0.0.123:22-10.0.0.1:35480.service - OpenSSH per-connection server daemon (10.0.0.1:35480). Dec 13 13:27:42.892410 systemd-logind[1454]: Removed session 13. Dec 13 13:27:42.939923 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 35480 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:42.940986 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:42.944961 systemd-logind[1454]: New session 14 of user core. Dec 13 13:27:42.954053 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:27:43.210599 sshd[5727]: Connection closed by 10.0.0.1 port 35480 Dec 13 13:27:43.210841 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:43.222404 systemd[1]: sshd@13-10.0.0.123:22-10.0.0.1:35480.service: Deactivated successfully. Dec 13 13:27:43.226083 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:27:43.228029 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 13:27:43.242215 systemd[1]: Started sshd@14-10.0.0.123:22-10.0.0.1:35482.service - OpenSSH per-connection server daemon (10.0.0.1:35482). Dec 13 13:27:43.243684 systemd-logind[1454]: Removed session 14. Dec 13 13:27:43.290762 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 35482 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:43.292013 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:43.295727 systemd-logind[1454]: New session 15 of user core. Dec 13 13:27:43.303040 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:27:44.690593 sshd[5741]: Connection closed by 10.0.0.1 port 35482 Dec 13 13:27:44.692143 sshd-session[5739]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:44.703367 systemd[1]: sshd@14-10.0.0.123:22-10.0.0.1:35482.service: Deactivated successfully. Dec 13 13:27:44.706809 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:27:44.709118 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:27:44.719190 systemd[1]: Started sshd@15-10.0.0.123:22-10.0.0.1:35494.service - OpenSSH per-connection server daemon (10.0.0.1:35494). Dec 13 13:27:44.721214 systemd-logind[1454]: Removed session 15. Dec 13 13:27:44.768976 sshd[5758]: Accepted publickey for core from 10.0.0.1 port 35494 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:44.770367 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:44.774672 systemd-logind[1454]: New session 16 of user core. Dec 13 13:27:44.783077 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 13:27:45.072073 sshd[5760]: Connection closed by 10.0.0.1 port 35494 Dec 13 13:27:45.073164 sshd-session[5758]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:45.081803 systemd[1]: sshd@15-10.0.0.123:22-10.0.0.1:35494.service: Deactivated successfully. Dec 13 13:27:45.084144 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:27:45.087012 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:27:45.092150 systemd[1]: Started sshd@16-10.0.0.123:22-10.0.0.1:35508.service - OpenSSH per-connection server daemon (10.0.0.1:35508). Dec 13 13:27:45.092981 systemd-logind[1454]: Removed session 16. Dec 13 13:27:45.136552 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 35508 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:45.137737 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:45.141656 systemd-logind[1454]: New session 17 of user core. Dec 13 13:27:45.151013 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:27:45.294380 sshd[5781]: Connection closed by 10.0.0.1 port 35508 Dec 13 13:27:45.295197 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:45.297637 systemd[1]: sshd@16-10.0.0.123:22-10.0.0.1:35508.service: Deactivated successfully. Dec 13 13:27:45.299697 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:27:45.302371 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:27:45.303319 systemd-logind[1454]: Removed session 17. Dec 13 13:27:50.305451 systemd[1]: Started sshd@17-10.0.0.123:22-10.0.0.1:35512.service - OpenSSH per-connection server daemon (10.0.0.1:35512). 
Dec 13 13:27:50.351900 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 35512 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:27:50.352849 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:50.357019 systemd-logind[1454]: New session 18 of user core.
Dec 13 13:27:50.368029 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:27:50.478454 sshd[5802]: Connection closed by 10.0.0.1 port 35512
Dec 13 13:27:50.478971 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:50.481461 systemd[1]: sshd@17-10.0.0.123:22-10.0.0.1:35512.service: Deactivated successfully.
Dec 13 13:27:50.483067 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:27:50.484525 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:27:50.485474 systemd-logind[1454]: Removed session 18.
Dec 13 13:27:55.489536 systemd[1]: Started sshd@18-10.0.0.123:22-10.0.0.1:32934.service - OpenSSH per-connection server daemon (10.0.0.1:32934).
Dec 13 13:27:55.535389 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 32934 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:27:55.536718 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:55.540216 systemd-logind[1454]: New session 19 of user core.
Dec 13 13:27:55.552078 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 13:27:55.699733 sshd[5825]: Connection closed by 10.0.0.1 port 32934
Dec 13 13:27:55.700102 sshd-session[5823]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:55.703318 systemd[1]: sshd@18-10.0.0.123:22-10.0.0.1:32934.service: Deactivated successfully.
Dec 13 13:27:55.707325 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 13:27:55.708240 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Dec 13 13:27:55.709421 systemd-logind[1454]: Removed session 19.
Dec 13 13:27:56.974929 containerd[1470]: time="2024-12-13T13:27:56.974889644Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\""
Dec 13 13:27:56.975303 containerd[1470]: time="2024-12-13T13:27:56.974995124Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully"
Dec 13 13:27:56.975303 containerd[1470]: time="2024-12-13T13:27:56.975006004Z" level=info msg="StopPodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully"
Dec 13 13:27:56.979543 containerd[1470]: time="2024-12-13T13:27:56.979501645Z" level=info msg="RemovePodSandbox for \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\""
Dec 13 13:27:56.979599 containerd[1470]: time="2024-12-13T13:27:56.979547525Z" level=info msg="Forcibly stopping sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\""
Dec 13 13:27:56.979637 containerd[1470]: time="2024-12-13T13:27:56.979619405Z" level=info msg="TearDown network for sandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" successfully"
Dec 13 13:27:56.988096 containerd[1470]: time="2024-12-13T13:27:56.988063366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:56.988155 containerd[1470]: time="2024-12-13T13:27:56.988124126Z" level=info msg="RemovePodSandbox \"82e88ef19224fab276f15fff182d10dd5aa6fb658f753603262f32fe1451a3d8\" returns successfully"
Dec 13 13:27:56.988772 containerd[1470]: time="2024-12-13T13:27:56.988697686Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\""
Dec 13 13:27:56.988843 containerd[1470]: time="2024-12-13T13:27:56.988790646Z" level=info msg="TearDown network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" successfully"
Dec 13 13:27:56.988843 containerd[1470]: time="2024-12-13T13:27:56.988801526Z" level=info msg="StopPodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" returns successfully"
Dec 13 13:27:56.989151 containerd[1470]: time="2024-12-13T13:27:56.989121766Z" level=info msg="RemovePodSandbox for \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\""
Dec 13 13:27:56.989197 containerd[1470]: time="2024-12-13T13:27:56.989153206Z" level=info msg="Forcibly stopping sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\""
Dec 13 13:27:56.989222 containerd[1470]: time="2024-12-13T13:27:56.989205766Z" level=info msg="TearDown network for sandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" successfully"
Dec 13 13:27:56.992591 containerd[1470]: time="2024-12-13T13:27:56.992412206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:56.992591 containerd[1470]: time="2024-12-13T13:27:56.992475206Z" level=info msg="RemovePodSandbox \"b12022606fec298cca643e809db898729e25ac7a1a2e4b945e0edb8f1bc98341\" returns successfully"
Dec 13 13:27:56.993044 containerd[1470]: time="2024-12-13T13:27:56.992788966Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\""
Dec 13 13:27:56.993044 containerd[1470]: time="2024-12-13T13:27:56.992902566Z" level=info msg="TearDown network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" successfully"
Dec 13 13:27:56.993044 containerd[1470]: time="2024-12-13T13:27:56.992914326Z" level=info msg="StopPodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" returns successfully"
Dec 13 13:27:56.993479 containerd[1470]: time="2024-12-13T13:27:56.993458406Z" level=info msg="RemovePodSandbox for \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\""
Dec 13 13:27:56.993701 containerd[1470]: time="2024-12-13T13:27:56.993584726Z" level=info msg="Forcibly stopping sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\""
Dec 13 13:27:56.993701 containerd[1470]: time="2024-12-13T13:27:56.993652966Z" level=info msg="TearDown network for sandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" successfully"
Dec 13 13:27:57.000528 containerd[1470]: time="2024-12-13T13:27:57.000381887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.000528 containerd[1470]: time="2024-12-13T13:27:57.000452607Z" level=info msg="RemovePodSandbox \"981b86e50a2e4cb062d6972caeb37ab970e145a0945e9373c8351e947fb237f4\" returns successfully"
Dec 13 13:27:57.000838 containerd[1470]: time="2024-12-13T13:27:57.000806687Z" level=info msg="StopPodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\""
Dec 13 13:27:57.000928 containerd[1470]: time="2024-12-13T13:27:57.000910447Z" level=info msg="TearDown network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" successfully"
Dec 13 13:27:57.000928 containerd[1470]: time="2024-12-13T13:27:57.000925887Z" level=info msg="StopPodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" returns successfully"
Dec 13 13:27:57.001598 containerd[1470]: time="2024-12-13T13:27:57.001252287Z" level=info msg="RemovePodSandbox for \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\""
Dec 13 13:27:57.001598 containerd[1470]: time="2024-12-13T13:27:57.001291287Z" level=info msg="Forcibly stopping sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\""
Dec 13 13:27:57.001598 containerd[1470]: time="2024-12-13T13:27:57.001359567Z" level=info msg="TearDown network for sandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" successfully"
Dec 13 13:27:57.015260 containerd[1470]: time="2024-12-13T13:27:57.015223728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.015430 containerd[1470]: time="2024-12-13T13:27:57.015412648Z" level=info msg="RemovePodSandbox \"28665c410f01cc4a3f0b3dc135a4aa2288f4981a36a7d7da8a38e9803909a0b3\" returns successfully"
Dec 13 13:27:57.015885 containerd[1470]: time="2024-12-13T13:27:57.015847889Z" level=info msg="StopPodSandbox for \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\""
Dec 13 13:27:57.015979 containerd[1470]: time="2024-12-13T13:27:57.015959529Z" level=info msg="TearDown network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\" successfully"
Dec 13 13:27:57.015979 containerd[1470]: time="2024-12-13T13:27:57.015975689Z" level=info msg="StopPodSandbox for \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\" returns successfully"
Dec 13 13:27:57.017512 containerd[1470]: time="2024-12-13T13:27:57.016320009Z" level=info msg="RemovePodSandbox for \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\""
Dec 13 13:27:57.017512 containerd[1470]: time="2024-12-13T13:27:57.016348489Z" level=info msg="Forcibly stopping sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\""
Dec 13 13:27:57.017512 containerd[1470]: time="2024-12-13T13:27:57.016419769Z" level=info msg="TearDown network for sandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\" successfully"
Dec 13 13:27:57.018837 containerd[1470]: time="2024-12-13T13:27:57.018808249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.019033 containerd[1470]: time="2024-12-13T13:27:57.019012409Z" level=info msg="RemovePodSandbox \"ffd3e784900842f1b03b0b7292ed26c16d097c668390b0bfb8e3630401546aa6\" returns successfully"
Dec 13 13:27:57.019364 containerd[1470]: time="2024-12-13T13:27:57.019344489Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\""
Dec 13 13:27:57.019569 containerd[1470]: time="2024-12-13T13:27:57.019548129Z" level=info msg="TearDown network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully"
Dec 13 13:27:57.019637 containerd[1470]: time="2024-12-13T13:27:57.019623729Z" level=info msg="StopPodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully"
Dec 13 13:27:57.019951 containerd[1470]: time="2024-12-13T13:27:57.019932689Z" level=info msg="RemovePodSandbox for \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\""
Dec 13 13:27:57.020094 containerd[1470]: time="2024-12-13T13:27:57.020062969Z" level=info msg="Forcibly stopping sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\""
Dec 13 13:27:57.020231 containerd[1470]: time="2024-12-13T13:27:57.020214689Z" level=info msg="TearDown network for sandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" successfully"
Dec 13 13:27:57.022657 containerd[1470]: time="2024-12-13T13:27:57.022630009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.022808 containerd[1470]: time="2024-12-13T13:27:57.022790209Z" level=info msg="RemovePodSandbox \"27f753461c75b2520c666cbe0a1bf5f551b1057d4e6d88357b64393d5796a9dd\" returns successfully"
Dec 13 13:27:57.023127 containerd[1470]: time="2024-12-13T13:27:57.023105649Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\""
Dec 13 13:27:57.023394 containerd[1470]: time="2024-12-13T13:27:57.023375169Z" level=info msg="TearDown network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" successfully"
Dec 13 13:27:57.023467 containerd[1470]: time="2024-12-13T13:27:57.023453129Z" level=info msg="StopPodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" returns successfully"
Dec 13 13:27:57.023856 containerd[1470]: time="2024-12-13T13:27:57.023830009Z" level=info msg="RemovePodSandbox for \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\""
Dec 13 13:27:57.024008 containerd[1470]: time="2024-12-13T13:27:57.023990689Z" level=info msg="Forcibly stopping sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\""
Dec 13 13:27:57.024120 containerd[1470]: time="2024-12-13T13:27:57.024104329Z" level=info msg="TearDown network for sandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" successfully"
Dec 13 13:27:57.027413 containerd[1470]: time="2024-12-13T13:27:57.027378490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.027540 containerd[1470]: time="2024-12-13T13:27:57.027523170Z" level=info msg="RemovePodSandbox \"ac41ff4cc00b556307a05cd4080412f7483fd2836ce0b704d27334cf15d284a6\" returns successfully"
Dec 13 13:27:57.028426 containerd[1470]: time="2024-12-13T13:27:57.028283970Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\""
Dec 13 13:27:57.028426 containerd[1470]: time="2024-12-13T13:27:57.028362890Z" level=info msg="TearDown network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" successfully"
Dec 13 13:27:57.028426 containerd[1470]: time="2024-12-13T13:27:57.028371930Z" level=info msg="StopPodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" returns successfully"
Dec 13 13:27:57.029715 containerd[1470]: time="2024-12-13T13:27:57.028649010Z" level=info msg="RemovePodSandbox for \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\""
Dec 13 13:27:57.029715 containerd[1470]: time="2024-12-13T13:27:57.028674210Z" level=info msg="Forcibly stopping sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\""
Dec 13 13:27:57.029715 containerd[1470]: time="2024-12-13T13:27:57.028735570Z" level=info msg="TearDown network for sandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" successfully"
Dec 13 13:27:57.031697 containerd[1470]: time="2024-12-13T13:27:57.031671810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.031924 containerd[1470]: time="2024-12-13T13:27:57.031904810Z" level=info msg="RemovePodSandbox \"98eb941cd0ea66b9687a3207f0c149e653e35a9ef2c103582f631a9c9a6ba849\" returns successfully"
Dec 13 13:27:57.032647 containerd[1470]: time="2024-12-13T13:27:57.032624010Z" level=info msg="StopPodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\""
Dec 13 13:27:57.032886 containerd[1470]: time="2024-12-13T13:27:57.032851850Z" level=info msg="TearDown network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" successfully"
Dec 13 13:27:57.033048 containerd[1470]: time="2024-12-13T13:27:57.033022370Z" level=info msg="StopPodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" returns successfully"
Dec 13 13:27:57.033521 containerd[1470]: time="2024-12-13T13:27:57.033497330Z" level=info msg="RemovePodSandbox for \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\""
Dec 13 13:27:57.033572 containerd[1470]: time="2024-12-13T13:27:57.033524170Z" level=info msg="Forcibly stopping sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\""
Dec 13 13:27:57.033597 containerd[1470]: time="2024-12-13T13:27:57.033589690Z" level=info msg="TearDown network for sandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" successfully"
Dec 13 13:27:57.038015 containerd[1470]: time="2024-12-13T13:27:57.037974251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.038099 containerd[1470]: time="2024-12-13T13:27:57.038032211Z" level=info msg="RemovePodSandbox \"6da3c483bc09561e3eaa1e766a1a2aa82e218cc7c295b6d24645c61431ad5be0\" returns successfully"
Dec 13 13:27:57.038577 containerd[1470]: time="2024-12-13T13:27:57.038437811Z" level=info msg="StopPodSandbox for \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\""
Dec 13 13:27:57.038577 containerd[1470]: time="2024-12-13T13:27:57.038520771Z" level=info msg="TearDown network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\" successfully"
Dec 13 13:27:57.038577 containerd[1470]: time="2024-12-13T13:27:57.038529611Z" level=info msg="StopPodSandbox for \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\" returns successfully"
Dec 13 13:27:57.039138 containerd[1470]: time="2024-12-13T13:27:57.039091051Z" level=info msg="RemovePodSandbox for \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\""
Dec 13 13:27:57.039138 containerd[1470]: time="2024-12-13T13:27:57.039126051Z" level=info msg="Forcibly stopping sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\""
Dec 13 13:27:57.039296 containerd[1470]: time="2024-12-13T13:27:57.039267251Z" level=info msg="TearDown network for sandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\" successfully"
Dec 13 13:27:57.041570 containerd[1470]: time="2024-12-13T13:27:57.041542051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.041618 containerd[1470]: time="2024-12-13T13:27:57.041588171Z" level=info msg="RemovePodSandbox \"f1a55cf25343be3ff987a3f6d2c674ce55f42d1f82c9cafb620feaf09bd7e472\" returns successfully"
Dec 13 13:27:57.042006 containerd[1470]: time="2024-12-13T13:27:57.041851491Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\""
Dec 13 13:27:57.042006 containerd[1470]: time="2024-12-13T13:27:57.041942731Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully"
Dec 13 13:27:57.042006 containerd[1470]: time="2024-12-13T13:27:57.041953731Z" level=info msg="StopPodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully"
Dec 13 13:27:57.042282 containerd[1470]: time="2024-12-13T13:27:57.042192251Z" level=info msg="RemovePodSandbox for \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\""
Dec 13 13:27:57.042282 containerd[1470]: time="2024-12-13T13:27:57.042219651Z" level=info msg="Forcibly stopping sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\""
Dec 13 13:27:57.042350 containerd[1470]: time="2024-12-13T13:27:57.042286091Z" level=info msg="TearDown network for sandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" successfully"
Dec 13 13:27:57.044539 containerd[1470]: time="2024-12-13T13:27:57.044507652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.044589 containerd[1470]: time="2024-12-13T13:27:57.044553652Z" level=info msg="RemovePodSandbox \"c5a9072b3c00d745aface7571023c0abffdb11eda589c93d145a27b3ebbe9018\" returns successfully"
Dec 13 13:27:57.044910 containerd[1470]: time="2024-12-13T13:27:57.044888212Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\""
Dec 13 13:27:57.045129 containerd[1470]: time="2024-12-13T13:27:57.045061572Z" level=info msg="TearDown network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" successfully"
Dec 13 13:27:57.045129 containerd[1470]: time="2024-12-13T13:27:57.045080212Z" level=info msg="StopPodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" returns successfully"
Dec 13 13:27:57.045921 containerd[1470]: time="2024-12-13T13:27:57.045451892Z" level=info msg="RemovePodSandbox for \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\""
Dec 13 13:27:57.045921 containerd[1470]: time="2024-12-13T13:27:57.045482972Z" level=info msg="Forcibly stopping sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\""
Dec 13 13:27:57.045921 containerd[1470]: time="2024-12-13T13:27:57.045541532Z" level=info msg="TearDown network for sandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" successfully"
Dec 13 13:27:57.047827 containerd[1470]: time="2024-12-13T13:27:57.047794412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.047900 containerd[1470]: time="2024-12-13T13:27:57.047842852Z" level=info msg="RemovePodSandbox \"6ebfe153f9bd5d284e3c2d0e770265e24cb4369ad8e7497825d870da4520a0a6\" returns successfully"
Dec 13 13:27:57.048242 containerd[1470]: time="2024-12-13T13:27:57.048218692Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\""
Dec 13 13:27:57.048325 containerd[1470]: time="2024-12-13T13:27:57.048308612Z" level=info msg="TearDown network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" successfully"
Dec 13 13:27:57.048325 containerd[1470]: time="2024-12-13T13:27:57.048323092Z" level=info msg="StopPodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" returns successfully"
Dec 13 13:27:57.049633 containerd[1470]: time="2024-12-13T13:27:57.048579652Z" level=info msg="RemovePodSandbox for \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\""
Dec 13 13:27:57.049633 containerd[1470]: time="2024-12-13T13:27:57.048608172Z" level=info msg="Forcibly stopping sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\""
Dec 13 13:27:57.049633 containerd[1470]: time="2024-12-13T13:27:57.048666292Z" level=info msg="TearDown network for sandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" successfully"
Dec 13 13:27:57.050921 containerd[1470]: time="2024-12-13T13:27:57.050892412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.051043 containerd[1470]: time="2024-12-13T13:27:57.051019252Z" level=info msg="RemovePodSandbox \"5117923122ba090e5d499c0acd72a36d052893c08e93f893dee251dd20c62718\" returns successfully"
Dec 13 13:27:57.051362 containerd[1470]: time="2024-12-13T13:27:57.051340212Z" level=info msg="StopPodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\""
Dec 13 13:27:57.051426 containerd[1470]: time="2024-12-13T13:27:57.051412212Z" level=info msg="TearDown network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" successfully"
Dec 13 13:27:57.051426 containerd[1470]: time="2024-12-13T13:27:57.051424372Z" level=info msg="StopPodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" returns successfully"
Dec 13 13:27:57.051743 containerd[1470]: time="2024-12-13T13:27:57.051666092Z" level=info msg="RemovePodSandbox for \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\""
Dec 13 13:27:57.051743 containerd[1470]: time="2024-12-13T13:27:57.051687732Z" level=info msg="Forcibly stopping sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\""
Dec 13 13:27:57.058296 containerd[1470]: time="2024-12-13T13:27:57.051748172Z" level=info msg="TearDown network for sandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" successfully"
Dec 13 13:27:57.060702 containerd[1470]: time="2024-12-13T13:27:57.060672693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.060754 containerd[1470]: time="2024-12-13T13:27:57.060717693Z" level=info msg="RemovePodSandbox \"11bc29dc0d8d27e911b08c204189f28fe100a6879dcb15fbb9ba688c573f4e2d\" returns successfully"
Dec 13 13:27:57.061151 containerd[1470]: time="2024-12-13T13:27:57.061114133Z" level=info msg="StopPodSandbox for \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\""
Dec 13 13:27:57.061214 containerd[1470]: time="2024-12-13T13:27:57.061203533Z" level=info msg="TearDown network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\" successfully"
Dec 13 13:27:57.061237 containerd[1470]: time="2024-12-13T13:27:57.061213853Z" level=info msg="StopPodSandbox for \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\" returns successfully"
Dec 13 13:27:57.061657 containerd[1470]: time="2024-12-13T13:27:57.061502453Z" level=info msg="RemovePodSandbox for \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\""
Dec 13 13:27:57.061657 containerd[1470]: time="2024-12-13T13:27:57.061535773Z" level=info msg="Forcibly stopping sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\""
Dec 13 13:27:57.061657 containerd[1470]: time="2024-12-13T13:27:57.061595653Z" level=info msg="TearDown network for sandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\" successfully"
Dec 13 13:27:57.064371 containerd[1470]: time="2024-12-13T13:27:57.064252734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.064371 containerd[1470]: time="2024-12-13T13:27:57.064308174Z" level=info msg="RemovePodSandbox \"935e87fa1a29f3b7e1fa6a61c2097ab55d13abeefd1f75daf149518f7772d275\" returns successfully"
Dec 13 13:27:57.064634 containerd[1470]: time="2024-12-13T13:27:57.064609134Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\""
Dec 13 13:27:57.064703 containerd[1470]: time="2024-12-13T13:27:57.064688454Z" level=info msg="TearDown network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully"
Dec 13 13:27:57.064703 containerd[1470]: time="2024-12-13T13:27:57.064701294Z" level=info msg="StopPodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully"
Dec 13 13:27:57.064964 containerd[1470]: time="2024-12-13T13:27:57.064924174Z" level=info msg="RemovePodSandbox for \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\""
Dec 13 13:27:57.064964 containerd[1470]: time="2024-12-13T13:27:57.064943454Z" level=info msg="Forcibly stopping sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\""
Dec 13 13:27:57.065063 containerd[1470]: time="2024-12-13T13:27:57.064991254Z" level=info msg="TearDown network for sandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" successfully"
Dec 13 13:27:57.067056 containerd[1470]: time="2024-12-13T13:27:57.067026814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.067098 containerd[1470]: time="2024-12-13T13:27:57.067084894Z" level=info msg="RemovePodSandbox \"1dd6d8c1a6eac4dd5b5c62d01d7187e7b328d566cb31b745bf1ff4ef183d29f4\" returns successfully"
Dec 13 13:27:57.067494 containerd[1470]: time="2024-12-13T13:27:57.067459694Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\""
Dec 13 13:27:57.067556 containerd[1470]: time="2024-12-13T13:27:57.067544134Z" level=info msg="TearDown network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" successfully"
Dec 13 13:27:57.067579 containerd[1470]: time="2024-12-13T13:27:57.067555134Z" level=info msg="StopPodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" returns successfully"
Dec 13 13:27:57.068837 containerd[1470]: time="2024-12-13T13:27:57.067786334Z" level=info msg="RemovePodSandbox for \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\""
Dec 13 13:27:57.068837 containerd[1470]: time="2024-12-13T13:27:57.067813214Z" level=info msg="Forcibly stopping sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\""
Dec 13 13:27:57.068837 containerd[1470]: time="2024-12-13T13:27:57.067887014Z" level=info msg="TearDown network for sandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" successfully"
Dec 13 13:27:57.070156 containerd[1470]: time="2024-12-13T13:27:57.070128214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.070257 containerd[1470]: time="2024-12-13T13:27:57.070241014Z" level=info msg="RemovePodSandbox \"2402bfc3fb517198a33cc6b1f6555325952a6abcc8100e95a58daba1204b8e98\" returns successfully"
Dec 13 13:27:57.070589 containerd[1470]: time="2024-12-13T13:27:57.070551374Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\""
Dec 13 13:27:57.070658 containerd[1470]: time="2024-12-13T13:27:57.070641774Z" level=info msg="TearDown network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" successfully"
Dec 13 13:27:57.070658 containerd[1470]: time="2024-12-13T13:27:57.070654854Z" level=info msg="StopPodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" returns successfully"
Dec 13 13:27:57.070912 containerd[1470]: time="2024-12-13T13:27:57.070856094Z" level=info msg="RemovePodSandbox for \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\""
Dec 13 13:27:57.070912 containerd[1470]: time="2024-12-13T13:27:57.070888014Z" level=info msg="Forcibly stopping sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\""
Dec 13 13:27:57.070987 containerd[1470]: time="2024-12-13T13:27:57.070934054Z" level=info msg="TearDown network for sandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" successfully"
Dec 13 13:27:57.073193 containerd[1470]: time="2024-12-13T13:27:57.073162694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.073260 containerd[1470]: time="2024-12-13T13:27:57.073206694Z" level=info msg="RemovePodSandbox \"84f3817040532860c1215df089849c6146828b5201bd21c8c2b10f4313126fa2\" returns successfully"
Dec 13 13:27:57.073684 containerd[1470]: time="2024-12-13T13:27:57.073643815Z" level=info msg="StopPodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\""
Dec 13 13:27:57.073751 containerd[1470]: time="2024-12-13T13:27:57.073736855Z" level=info msg="TearDown network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" successfully"
Dec 13 13:27:57.073751 containerd[1470]: time="2024-12-13T13:27:57.073749215Z" level=info msg="StopPodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" returns successfully"
Dec 13 13:27:57.074010 containerd[1470]: time="2024-12-13T13:27:57.073987855Z" level=info msg="RemovePodSandbox for \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\""
Dec 13 13:27:57.074010 containerd[1470]: time="2024-12-13T13:27:57.074010255Z" level=info msg="Forcibly stopping sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\""
Dec 13 13:27:57.074086 containerd[1470]: time="2024-12-13T13:27:57.074064655Z" level=info msg="TearDown network for sandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" successfully"
Dec 13 13:27:57.076123 containerd[1470]: time="2024-12-13T13:27:57.076097455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.076184 containerd[1470]: time="2024-12-13T13:27:57.076141215Z" level=info msg="RemovePodSandbox \"f0c5c755d19130228cfe26ea013b5dbc206b46452154e1ebbb8578b200cc559b\" returns successfully"
Dec 13 13:27:57.076452 containerd[1470]: time="2024-12-13T13:27:57.076423535Z" level=info msg="StopPodSandbox for \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\""
Dec 13 13:27:57.076520 containerd[1470]: time="2024-12-13T13:27:57.076500935Z" level=info msg="TearDown network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\" successfully"
Dec 13 13:27:57.076520 containerd[1470]: time="2024-12-13T13:27:57.076515335Z" level=info msg="StopPodSandbox for \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\" returns successfully"
Dec 13 13:27:57.076746 containerd[1470]: time="2024-12-13T13:27:57.076727935Z" level=info msg="RemovePodSandbox for \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\""
Dec 13 13:27:57.076798 containerd[1470]: time="2024-12-13T13:27:57.076750015Z" level=info msg="Forcibly stopping sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\""
Dec 13 13:27:57.076820 containerd[1470]: time="2024-12-13T13:27:57.076807335Z" level=info msg="TearDown network for sandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\" successfully"
Dec 13 13:27:57.079193 containerd[1470]: time="2024-12-13T13:27:57.079160535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.079248 containerd[1470]: time="2024-12-13T13:27:57.079208855Z" level=info msg="RemovePodSandbox \"6d570a13a57e03098bfffbaaa22039f83f8481fef8b859bf88b497a88d87e04a\" returns successfully"
Dec 13 13:27:57.079819 containerd[1470]: time="2024-12-13T13:27:57.079600895Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\""
Dec 13 13:27:57.079819 containerd[1470]: time="2024-12-13T13:27:57.079682295Z" level=info msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully"
Dec 13 13:27:57.079819 containerd[1470]: time="2024-12-13T13:27:57.079691295Z" level=info msg="StopPodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully"
Dec 13 13:27:57.079978 containerd[1470]: time="2024-12-13T13:27:57.079954215Z" level=info msg="RemovePodSandbox for \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\""
Dec 13 13:27:57.080006 containerd[1470]: time="2024-12-13T13:27:57.079981695Z" level=info msg="Forcibly stopping sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\""
Dec 13 13:27:57.080082 containerd[1470]: time="2024-12-13T13:27:57.080043135Z" level=info msg="TearDown network for sandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" successfully"
Dec 13 13:27:57.082257 containerd[1470]: time="2024-12-13T13:27:57.082226935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.082336 containerd[1470]: time="2024-12-13T13:27:57.082287935Z" level=info msg="RemovePodSandbox \"9a96510483de5288a555a7496385fae3414d83977f2aea433b94563106a02b7f\" returns successfully"
Dec 13 13:27:57.082645 containerd[1470]: time="2024-12-13T13:27:57.082577655Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\""
Dec 13 13:27:57.082694 containerd[1470]: time="2024-12-13T13:27:57.082671855Z" level=info msg="TearDown network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" successfully"
Dec 13 13:27:57.082694 containerd[1470]: time="2024-12-13T13:27:57.082681855Z" level=info msg="StopPodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" returns successfully"
Dec 13 13:27:57.084186 containerd[1470]: time="2024-12-13T13:27:57.083093536Z" level=info msg="RemovePodSandbox for \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\""
Dec 13 13:27:57.084186 containerd[1470]: time="2024-12-13T13:27:57.083121456Z" level=info msg="Forcibly stopping sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\""
Dec 13 13:27:57.084186 containerd[1470]: time="2024-12-13T13:27:57.083180336Z" level=info msg="TearDown network for sandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" successfully"
Dec 13 13:27:57.085449 containerd[1470]: time="2024-12-13T13:27:57.085416496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.085562 containerd[1470]: time="2024-12-13T13:27:57.085545816Z" level=info msg="RemovePodSandbox \"2524df59d1d5837bc47c1f56c1f561f460cf084e19c6176080b270b2eb6b9f65\" returns successfully"
Dec 13 13:27:57.085932 containerd[1470]: time="2024-12-13T13:27:57.085906696Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\""
Dec 13 13:27:57.086010 containerd[1470]: time="2024-12-13T13:27:57.085990776Z" level=info msg="TearDown network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" successfully"
Dec 13 13:27:57.086010 containerd[1470]: time="2024-12-13T13:27:57.086004216Z" level=info msg="StopPodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" returns successfully"
Dec 13 13:27:57.086278 containerd[1470]: time="2024-12-13T13:27:57.086246016Z" level=info msg="RemovePodSandbox for \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\""
Dec 13 13:27:57.086325 containerd[1470]: time="2024-12-13T13:27:57.086281216Z" level=info msg="Forcibly stopping sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\""
Dec 13 13:27:57.086361 containerd[1470]: time="2024-12-13T13:27:57.086346336Z" level=info msg="TearDown network for sandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" successfully"
Dec 13 13:27:57.088600 containerd[1470]: time="2024-12-13T13:27:57.088567656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.088649 containerd[1470]: time="2024-12-13T13:27:57.088620936Z" level=info msg="RemovePodSandbox \"86c882a73419c7c668dfe9102b04b31ccd6932449f9ba94a44718f2750f7c19b\" returns successfully"
Dec 13 13:27:57.089245 containerd[1470]: time="2024-12-13T13:27:57.088962336Z" level=info msg="StopPodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\""
Dec 13 13:27:57.089245 containerd[1470]: time="2024-12-13T13:27:57.089055736Z" level=info msg="TearDown network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" successfully"
Dec 13 13:27:57.089245 containerd[1470]: time="2024-12-13T13:27:57.089065376Z" level=info msg="StopPodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" returns successfully"
Dec 13 13:27:57.089371 containerd[1470]: time="2024-12-13T13:27:57.089323136Z" level=info msg="RemovePodSandbox for \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\""
Dec 13 13:27:57.089371 containerd[1470]: time="2024-12-13T13:27:57.089346976Z" level=info msg="Forcibly stopping sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\""
Dec 13 13:27:57.089428 containerd[1470]: time="2024-12-13T13:27:57.089407216Z" level=info msg="TearDown network for sandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" successfully"
Dec 13 13:27:57.091686 containerd[1470]: time="2024-12-13T13:27:57.091654376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.091752 containerd[1470]: time="2024-12-13T13:27:57.091708656Z" level=info msg="RemovePodSandbox \"d7b4d2f75c00d8923e4129d720e722943bb77a0cfdc8afedf8ea41878d41a057\" returns successfully"
Dec 13 13:27:57.092145 containerd[1470]: time="2024-12-13T13:27:57.092077656Z" level=info msg="StopPodSandbox for \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\""
Dec 13 13:27:57.092247 containerd[1470]: time="2024-12-13T13:27:57.092228736Z" level=info msg="TearDown network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\" successfully"
Dec 13 13:27:57.092284 containerd[1470]: time="2024-12-13T13:27:57.092245216Z" level=info msg="StopPodSandbox for \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\" returns successfully"
Dec 13 13:27:57.092900 containerd[1470]: time="2024-12-13T13:27:57.092556496Z" level=info msg="RemovePodSandbox for \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\""
Dec 13 13:27:57.092900 containerd[1470]: time="2024-12-13T13:27:57.092585416Z" level=info msg="Forcibly stopping sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\""
Dec 13 13:27:57.092900 containerd[1470]: time="2024-12-13T13:27:57.092641656Z" level=info msg="TearDown network for sandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\" successfully"
Dec 13 13:27:57.101098 containerd[1470]: time="2024-12-13T13:27:57.101050937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.101182 containerd[1470]: time="2024-12-13T13:27:57.101118777Z" level=info msg="RemovePodSandbox \"38fddec9a10e52bf3d7532ba46d65d687a15cb24fffb9f932ef372b48ada3b20\" returns successfully"
Dec 13 13:27:57.101786 containerd[1470]: time="2024-12-13T13:27:57.101532417Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\""
Dec 13 13:27:57.101786 containerd[1470]: time="2024-12-13T13:27:57.101627617Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully"
Dec 13 13:27:57.101786 containerd[1470]: time="2024-12-13T13:27:57.101638217Z" level=info msg="StopPodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully"
Dec 13 13:27:57.102208 containerd[1470]: time="2024-12-13T13:27:57.102184257Z" level=info msg="RemovePodSandbox for \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\""
Dec 13 13:27:57.102257 containerd[1470]: time="2024-12-13T13:27:57.102211017Z" level=info msg="Forcibly stopping sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\""
Dec 13 13:27:57.102300 containerd[1470]: time="2024-12-13T13:27:57.102277817Z" level=info msg="TearDown network for sandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" successfully"
Dec 13 13:27:57.104506 containerd[1470]: time="2024-12-13T13:27:57.104476818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.104567 containerd[1470]: time="2024-12-13T13:27:57.104530178Z" level=info msg="RemovePodSandbox \"d42c754f5bfce3ff24d96e0d14d8e10b47072d6fee77834d8ccae6ccaea0fb69\" returns successfully"
Dec 13 13:27:57.104931 containerd[1470]: time="2024-12-13T13:27:57.104889658Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\""
Dec 13 13:27:57.104987 containerd[1470]: time="2024-12-13T13:27:57.104977698Z" level=info msg="TearDown network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" successfully"
Dec 13 13:27:57.105027 containerd[1470]: time="2024-12-13T13:27:57.104987858Z" level=info msg="StopPodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" returns successfully"
Dec 13 13:27:57.105658 containerd[1470]: time="2024-12-13T13:27:57.105465858Z" level=info msg="RemovePodSandbox for \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\""
Dec 13 13:27:57.105658 containerd[1470]: time="2024-12-13T13:27:57.105492898Z" level=info msg="Forcibly stopping sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\""
Dec 13 13:27:57.105658 containerd[1470]: time="2024-12-13T13:27:57.105558058Z" level=info msg="TearDown network for sandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" successfully"
Dec 13 13:27:57.108167 containerd[1470]: time="2024-12-13T13:27:57.107902418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.108167 containerd[1470]: time="2024-12-13T13:27:57.107953978Z" level=info msg="RemovePodSandbox \"e800de8956fc69d7244ccf347e8f59184b294ff27d86fb84f83ff71188ec38c0\" returns successfully"
Dec 13 13:27:57.108329 containerd[1470]: time="2024-12-13T13:27:57.108248218Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\""
Dec 13 13:27:57.108510 containerd[1470]: time="2024-12-13T13:27:57.108344378Z" level=info msg="TearDown network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" successfully"
Dec 13 13:27:57.108510 containerd[1470]: time="2024-12-13T13:27:57.108354618Z" level=info msg="StopPodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" returns successfully"
Dec 13 13:27:57.108721 containerd[1470]: time="2024-12-13T13:27:57.108687738Z" level=info msg="RemovePodSandbox for \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\""
Dec 13 13:27:57.108771 containerd[1470]: time="2024-12-13T13:27:57.108721698Z" level=info msg="Forcibly stopping sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\""
Dec 13 13:27:57.108806 containerd[1470]: time="2024-12-13T13:27:57.108793058Z" level=info msg="TearDown network for sandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" successfully"
Dec 13 13:27:57.110997 containerd[1470]: time="2024-12-13T13:27:57.110960858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.111063 containerd[1470]: time="2024-12-13T13:27:57.111015978Z" level=info msg="RemovePodSandbox \"a1e711814fe87f2d186bc5aa964ee62e4f7c132dedd2d5657686553649b8af34\" returns successfully"
Dec 13 13:27:57.111670 containerd[1470]: time="2024-12-13T13:27:57.111488058Z" level=info msg="StopPodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\""
Dec 13 13:27:57.111670 containerd[1470]: time="2024-12-13T13:27:57.111588138Z" level=info msg="TearDown network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" successfully"
Dec 13 13:27:57.111670 containerd[1470]: time="2024-12-13T13:27:57.111599258Z" level=info msg="StopPodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" returns successfully"
Dec 13 13:27:57.112066 containerd[1470]: time="2024-12-13T13:27:57.112003258Z" level=info msg="RemovePodSandbox for \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\""
Dec 13 13:27:57.112066 containerd[1470]: time="2024-12-13T13:27:57.112031498Z" level=info msg="Forcibly stopping sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\""
Dec 13 13:27:57.112173 containerd[1470]: time="2024-12-13T13:27:57.112093499Z" level=info msg="TearDown network for sandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" successfully"
Dec 13 13:27:57.114228 containerd[1470]: time="2024-12-13T13:27:57.114192419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.114291 containerd[1470]: time="2024-12-13T13:27:57.114248659Z" level=info msg="RemovePodSandbox \"2acfd3bccb51b496667a56602d7f76917d689ef151d291f89058c90ec3e6fdbd\" returns successfully"
Dec 13 13:27:57.114643 containerd[1470]: time="2024-12-13T13:27:57.114562339Z" level=info msg="StopPodSandbox for \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\""
Dec 13 13:27:57.114805 containerd[1470]: time="2024-12-13T13:27:57.114656499Z" level=info msg="TearDown network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\" successfully"
Dec 13 13:27:57.114805 containerd[1470]: time="2024-12-13T13:27:57.114666699Z" level=info msg="StopPodSandbox for \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\" returns successfully"
Dec 13 13:27:57.114953 containerd[1470]: time="2024-12-13T13:27:57.114865139Z" level=info msg="RemovePodSandbox for \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\""
Dec 13 13:27:57.114953 containerd[1470]: time="2024-12-13T13:27:57.114902579Z" level=info msg="Forcibly stopping sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\""
Dec 13 13:27:57.115099 containerd[1470]: time="2024-12-13T13:27:57.114960099Z" level=info msg="TearDown network for sandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\" successfully"
Dec 13 13:27:57.117984 containerd[1470]: time="2024-12-13T13:27:57.117943019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:27:57.118054 containerd[1470]: time="2024-12-13T13:27:57.118025259Z" level=info msg="RemovePodSandbox \"dde4c85d63593dbc1b0eb31d47d9664d70fa15ee0fbb41aae212d0e0070482b2\" returns successfully"
Dec 13 13:28:00.710731 systemd[1]: Started sshd@19-10.0.0.123:22-10.0.0.1:32948.service - OpenSSH per-connection server daemon (10.0.0.1:32948).
Dec 13 13:28:00.762556 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 32948 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:28:00.763837 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:00.767398 systemd-logind[1454]: New session 20 of user core.
Dec 13 13:28:00.779024 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:28:00.898973 sshd[5862]: Connection closed by 10.0.0.1 port 32948
Dec 13 13:28:00.899728 sshd-session[5860]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:00.904352 systemd[1]: sshd@19-10.0.0.123:22-10.0.0.1:32948.service: Deactivated successfully.
Dec 13 13:28:00.906373 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:28:00.907190 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:28:00.908150 systemd-logind[1454]: Removed session 20.