Dec 13 13:06:54.897817 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:06:54.897836 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 13:06:54.897846 kernel: KASLR enabled
Dec 13 13:06:54.897851 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:06:54.897857 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Dec 13 13:06:54.897862 kernel: random: crng init done
Dec 13 13:06:54.897869 kernel: secureboot: Secure boot disabled
Dec 13 13:06:54.897874 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:06:54.897880 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Dec 13 13:06:54.897887 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:06:54.897893 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897899 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897904 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897911 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897918 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897925 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897932 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897938 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897944 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:06:54.897951 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:06:54.897958 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:06:54.897964 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:06:54.897970 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Dec 13 13:06:54.897976 kernel: Zone ranges:
Dec 13 13:06:54.897982 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:06:54.897989 kernel: DMA32 empty
Dec 13 13:06:54.897995 kernel: Normal empty
Dec 13 13:06:54.898001 kernel: Movable zone start for each node
Dec 13 13:06:54.898007 kernel: Early memory node ranges
Dec 13 13:06:54.898013 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Dec 13 13:06:54.898019 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Dec 13 13:06:54.898025 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Dec 13 13:06:54.898031 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 13:06:54.898036 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 13:06:54.898042 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 13:06:54.898048 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 13:06:54.898054 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 13:06:54.898061 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 13:06:54.898067 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:06:54.898073 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:06:54.898082 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:06:54.898097 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:06:54.898104 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:06:54.898112 kernel: psci: Trusted OS migration not required
Dec 13 13:06:54.898118 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:06:54.898125 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:06:54.898131 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 13:06:54.898137 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 13:06:54.898144 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:06:54.898150 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:06:54.898157 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:06:54.898163 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:06:54.898169 kernel: CPU features: detected: Spectre-v4
Dec 13 13:06:54.898177 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:06:54.898183 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:06:54.898190 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:06:54.898196 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:06:54.898203 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:06:54.898209 kernel: alternatives: applying boot alternatives
Dec 13 13:06:54.898217 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:06:54.898223 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:06:54.898230 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:06:54.898236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:06:54.898243 kernel: Fallback order for Node 0: 0
Dec 13 13:06:54.898250 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:06:54.898257 kernel: Policy zone: DMA
Dec 13 13:06:54.898263 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:06:54.898269 kernel: software IO TLB: area num 4.
Dec 13 13:06:54.898276 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 13:06:54.898282 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Dec 13 13:06:54.898289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:06:54.898295 kernel: trace event string verifier disabled
Dec 13 13:06:54.898302 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:06:54.898309 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:06:54.898316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:06:54.898322 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:06:54.898330 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:06:54.898336 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:06:54.898343 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:06:54.898349 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:06:54.898356 kernel: GICv3: 256 SPIs implemented
Dec 13 13:06:54.898362 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:06:54.898368 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:06:54.898375 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 13:06:54.898381 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:06:54.898387 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:06:54.898394 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:06:54.898402 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:06:54.898408 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 13:06:54.898415 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 13:06:54.898421 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:06:54.898427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:06:54.898434 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:06:54.898440 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:06:54.898447 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:06:54.898453 kernel: arm-pv: using stolen time PV
Dec 13 13:06:54.898460 kernel: Console: colour dummy device 80x25
Dec 13 13:06:54.898467 kernel: ACPI: Core revision 20230628
Dec 13 13:06:54.898474 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:06:54.898481 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:06:54.898488 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:06:54.898494 kernel: landlock: Up and running.
Dec 13 13:06:54.898501 kernel: SELinux: Initializing.
Dec 13 13:06:54.898507 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:06:54.898514 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:06:54.898521 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:06:54.898527 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:06:54.898535 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:06:54.898542 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:06:54.898548 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:06:54.898555 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:06:54.898562 kernel: Remapping and enabling EFI services.
Dec 13 13:06:54.898569 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:06:54.898575 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:06:54.898582 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:06:54.898589 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 13:06:54.898596 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:06:54.898603 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:06:54.898614 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:06:54.898623 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:06:54.898630 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 13:06:54.898637 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:06:54.898643 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:06:54.898650 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:06:54.898657 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:06:54.898665 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 13:06:54.898672 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:06:54.898686 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:06:54.898693 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:06:54.898700 kernel: SMP: Total of 4 processors activated.
Dec 13 13:06:54.898707 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:06:54.898714 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:06:54.898721 kernel: CPU features: detected: Common not Private translations
Dec 13 13:06:54.898728 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:06:54.898737 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 13:06:54.898744 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:06:54.898751 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:06:54.898758 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:06:54.898765 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:06:54.898772 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:06:54.898779 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:06:54.898787 kernel: alternatives: applying system-wide alternatives
Dec 13 13:06:54.898794 kernel: devtmpfs: initialized
Dec 13 13:06:54.898803 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:06:54.898810 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:06:54.898817 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:06:54.898824 kernel: SMBIOS 3.0.0 present.
Dec 13 13:06:54.898831 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 13:06:54.898838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:06:54.898845 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:06:54.898852 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:06:54.898859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:06:54.898867 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:06:54.898874 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 13 13:06:54.898881 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:06:54.898888 kernel: cpuidle: using governor menu
Dec 13 13:06:54.898895 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:06:54.898901 kernel: ASID allocator initialised with 32768 entries
Dec 13 13:06:54.898908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:06:54.898915 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:06:54.898922 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 13:06:54.898931 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 13:06:54.898938 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 13:06:54.898945 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:06:54.898952 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:06:54.898959 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:06:54.898967 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 13:06:54.898974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:06:54.898981 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:06:54.898988 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:06:54.899011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 13:06:54.899018 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:06:54.899025 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:06:54.899033 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:06:54.899040 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:06:54.899048 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:06:54.899055 kernel: ACPI: Interpreter enabled
Dec 13 13:06:54.899062 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:06:54.899069 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:06:54.899078 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 13:06:54.899085 kernel: printk: console [ttyAMA0] enabled
Dec 13 13:06:54.899097 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:06:54.899219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:06:54.899289 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:06:54.899351 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:06:54.899412 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 13:06:54.899476 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 13:06:54.899485 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 13:06:54.899492 kernel: PCI host bridge to bus 0000:00
Dec 13 13:06:54.899558 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 13:06:54.899617 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:06:54.899672 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 13:06:54.899741 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:06:54.899827 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 13:06:54.899908 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:06:54.899977 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 13:06:54.900039 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 13:06:54.900143 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:06:54.900214 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:06:54.900277 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 13:06:54.900344 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 13:06:54.900416 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 13:06:54.900472 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:06:54.900529 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 13:06:54.900538 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:06:54.900545 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:06:54.900552 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:06:54.900560 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:06:54.900568 kernel: iommu: Default domain type: Translated
Dec 13 13:06:54.900574 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:06:54.900581 kernel: efivars: Registered efivars operations
Dec 13 13:06:54.900588 kernel: vgaarb: loaded
Dec 13 13:06:54.900595 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:06:54.900602 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:06:54.900609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:06:54.900616 kernel: pnp: PnP ACPI init
Dec 13 13:06:54.900694 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 13:06:54.900707 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:06:54.900714 kernel: NET: Registered PF_INET protocol family
Dec 13 13:06:54.900721 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:06:54.900728 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:06:54.900735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:06:54.900742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:06:54.900749 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:06:54.900756 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:06:54.900765 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:06:54.900772 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:06:54.900779 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:06:54.900786 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:06:54.900793 kernel: kvm [1]: HYP mode not available
Dec 13 13:06:54.900800 kernel: Initialise system trusted keyrings
Dec 13 13:06:54.900807 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:06:54.900814 kernel: Key type asymmetric registered
Dec 13 13:06:54.900821 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:06:54.900829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 13:06:54.900837 kernel: io scheduler mq-deadline registered
Dec 13 13:06:54.900843 kernel: io scheduler kyber registered
Dec 13 13:06:54.900850 kernel: io scheduler bfq registered
Dec 13 13:06:54.900857 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:06:54.900864 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:06:54.900871 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 13:06:54.900936 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 13:06:54.900945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:06:54.900954 kernel: thunder_xcv, ver 1.0
Dec 13 13:06:54.900961 kernel: thunder_bgx, ver 1.0
Dec 13 13:06:54.900968 kernel: nicpf, ver 1.0
Dec 13 13:06:54.900975 kernel: nicvf, ver 1.0
Dec 13 13:06:54.901047 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:06:54.901124 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:06:54 UTC (1734095214)
Dec 13 13:06:54.901134 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:06:54.901142 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 13:06:54.901151 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 13:06:54.901158 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 13:06:54.901165 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:06:54.901172 kernel: Segment Routing with IPv6
Dec 13 13:06:54.901179 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:06:54.901186 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:06:54.901193 kernel: Key type dns_resolver registered
Dec 13 13:06:54.901200 kernel: registered taskstats version 1
Dec 13 13:06:54.901208 kernel: Loading compiled-in X.509 certificates
Dec 13 13:06:54.901216 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78'
Dec 13 13:06:54.901224 kernel: Key type .fscrypt registered
Dec 13 13:06:54.901231 kernel: Key type fscrypt-provisioning registered
Dec 13 13:06:54.901238 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:06:54.901245 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:06:54.901252 kernel: ima: No architecture policies found
Dec 13 13:06:54.901260 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:06:54.901266 kernel: clk: Disabling unused clocks
Dec 13 13:06:54.901273 kernel: Freeing unused kernel memory: 39936K
Dec 13 13:06:54.901281 kernel: Run /init as init process
Dec 13 13:06:54.901288 kernel: with arguments:
Dec 13 13:06:54.901295 kernel: /init
Dec 13 13:06:54.901301 kernel: with environment:
Dec 13 13:06:54.901308 kernel: HOME=/
Dec 13 13:06:54.901315 kernel: TERM=linux
Dec 13 13:06:54.901322 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:06:54.901330 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:06:54.901340 systemd[1]: Detected virtualization kvm.
Dec 13 13:06:54.901348 systemd[1]: Detected architecture arm64.
Dec 13 13:06:54.901355 systemd[1]: Running in initrd.
Dec 13 13:06:54.901362 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:06:54.901369 systemd[1]: Hostname set to <localhost>.
Dec 13 13:06:54.901376 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:06:54.901384 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:06:54.901391 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:06:54.901400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:06:54.901408 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:06:54.901416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:06:54.901424 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:06:54.901431 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:06:54.901440 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:06:54.901449 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:06:54.901456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:06:54.901464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:06:54.901471 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:06:54.901479 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:06:54.901486 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:06:54.901494 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:06:54.901501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:06:54.901508 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:06:54.901517 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:06:54.901525 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:06:54.901533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:06:54.901540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:06:54.901548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:06:54.901555 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:06:54.901562 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:06:54.901570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:06:54.901578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:06:54.901586 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:06:54.901593 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:06:54.901601 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:06:54.901608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:06:54.901615 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:06:54.901623 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:06:54.901630 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:06:54.901640 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:06:54.901647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:06:54.901655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:06:54.901683 systemd-journald[239]: Collecting audit messages is disabled.
Dec 13 13:06:54.901705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:06:54.901712 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:06:54.901721 systemd-journald[239]: Journal started
Dec 13 13:06:54.901743 systemd-journald[239]: Runtime Journal (/run/log/journal/c12986a13e064134afe6b4191d071f4a) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:06:54.892707 systemd-modules-load[240]: Inserted module 'overlay'
Dec 13 13:06:54.906112 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:06:54.906141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:06:54.909024 systemd-modules-load[240]: Inserted module 'br_netfilter'
Dec 13 13:06:54.909775 kernel: Bridge firewalling registered
Dec 13 13:06:54.909965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:06:54.912117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:06:54.913347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:06:54.915343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:06:54.920555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:06:54.921631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:06:54.923672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:06:54.939202 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:06:54.941008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:06:54.949691 dracut-cmdline[275]: dracut-dracut-053
Dec 13 13:06:54.951781 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:06:54.964843 systemd-resolved[277]: Positive Trust Anchors:
Dec 13 13:06:54.964862 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:06:54.964892 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:06:54.969483 systemd-resolved[277]: Defaulting to hostname 'linux'.
Dec 13 13:06:54.970378 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:06:54.972272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:06:55.019114 kernel: SCSI subsystem initialized
Dec 13 13:06:55.023104 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:06:55.030109 kernel: iscsi: registered transport (tcp)
Dec 13 13:06:55.043117 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:06:55.043133 kernel: QLogic iSCSI HBA Driver
Dec 13 13:06:55.079294 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:06:55.084224 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:06:55.099320 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:06:55.099350 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:06:55.100106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:06:55.147111 kernel: raid6: neonx8 gen() 15792 MB/s
Dec 13 13:06:55.164103 kernel: raid6: neonx4 gen() 15815 MB/s
Dec 13 13:06:55.181107 kernel: raid6: neonx2 gen() 13220 MB/s
Dec 13 13:06:55.198102 kernel: raid6: neonx1 gen() 10420 MB/s
Dec 13 13:06:55.215104 kernel: raid6: int64x8 gen() 6792 MB/s
Dec 13 13:06:55.232102 kernel: raid6: int64x4 gen() 7347 MB/s
Dec 13 13:06:55.249104 kernel: raid6: int64x2 gen() 6111 MB/s
Dec 13 13:06:55.266104 kernel: raid6: int64x1 gen() 5055 MB/s
Dec 13 13:06:55.266119 kernel: raid6: using algorithm neonx4 gen() 15815 MB/s
Dec 13 13:06:55.283107 kernel: raid6: .... xor() 12427 MB/s, rmw enabled
Dec 13 13:06:55.283119 kernel: raid6: using neon recovery algorithm
Dec 13 13:06:55.288251 kernel: xor: measuring software checksum speed
Dec 13 13:06:55.288266 kernel: 8regs : 21658 MB/sec
Dec 13 13:06:55.288280 kernel: 32regs : 21630 MB/sec
Dec 13 13:06:55.289196 kernel: arm64_neon : 27908 MB/sec
Dec 13 13:06:55.289218 kernel: xor: using function: arm64_neon (27908 MB/sec)
Dec 13 13:06:55.337107 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:06:55.346992 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:06:55.357216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:06:55.368170 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Dec 13 13:06:55.371241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:06:55.373399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:06:55.387438 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Dec 13 13:06:55.410235 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:06:55.422216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:06:55.460832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:06:55.468254 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:06:55.478787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:06:55.480364 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:06:55.481237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:06:55.482805 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:06:55.493217 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:06:55.500154 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:06:55.503552 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 13:06:55.512973 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:06:55.513087 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:06:55.513163 kernel: GPT:9289727 != 19775487
Dec 13 13:06:55.513172 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:06:55.513182 kernel: GPT:9289727 != 19775487
Dec 13 13:06:55.513190 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:06:55.513199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:06:55.512623 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:06:55.512747 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:06:55.515491 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:06:55.516963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:06:55.517119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:06:55.519158 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:06:55.528485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:06:55.534105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (520)
Dec 13 13:06:55.537110 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (506)
Dec 13 13:06:55.540129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:06:55.548076 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:06:55.552469 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:06:55.559496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:06:55.563270 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:06:55.564343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:06:55.579224 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:06:55.581054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:06:55.585874 disk-uuid[550]: Primary Header is updated.
Dec 13 13:06:55.585874 disk-uuid[550]: Secondary Entries is updated.
Dec 13 13:06:55.585874 disk-uuid[550]: Secondary Header is updated.
Dec 13 13:06:55.590941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:06:55.603573 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:06:56.599226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:06:56.600324 disk-uuid[551]: The operation has completed successfully.
Dec 13 13:06:56.626459 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:06:56.626560 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:06:56.650289 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:06:56.653185 sh[570]: Success
Dec 13 13:06:56.665107 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 13:06:56.696481 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:06:56.708450 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:06:56.711120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:06:56.720312 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 13:06:56.720352 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:06:56.720362 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:06:56.721622 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:06:56.721638 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:06:56.725037 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:06:56.726343 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:06:56.735266 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:06:56.736829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:06:56.745003 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:06:56.745053 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:06:56.745064 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:06:56.748120 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:06:56.755667 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:06:56.757116 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:06:56.763435 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:06:56.771262 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:06:56.838341 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:06:56.848275 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:06:56.878120 systemd-networkd[762]: lo: Link UP
Dec 13 13:06:56.878129 systemd-networkd[762]: lo: Gained carrier
Dec 13 13:06:56.880157 ignition[659]: Ignition 2.20.0
Dec 13 13:06:56.879108 systemd-networkd[762]: Enumeration completed
Dec 13 13:06:56.880163 ignition[659]: Stage: fetch-offline
Dec 13 13:06:56.879387 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:06:56.880197 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:06:56.880411 systemd[1]: Reached target network.target - Network.
Dec 13 13:06:56.880205 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:06:56.880609 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:06:56.880378 ignition[659]: parsed url from cmdline: ""
Dec 13 13:06:56.880613 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:06:56.880381 ignition[659]: no config URL provided
Dec 13 13:06:56.881523 systemd-networkd[762]: eth0: Link UP
Dec 13 13:06:56.880386 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:06:56.881526 systemd-networkd[762]: eth0: Gained carrier
Dec 13 13:06:56.880393 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:06:56.881536 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:06:56.880418 ignition[659]: op(1): [started] loading QEMU firmware config module
Dec 13 13:06:56.880422 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:06:56.889588 ignition[659]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:06:56.902136 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:06:56.934196 ignition[659]: parsing config with SHA512: 1e21fa382faaa2bfc03c5111723461b212bf4d7c28d4a98e77761544721710d4fc4b429df4f535045d562c5f968307732111a1802a2d3388dc5b7ee3a580f2fe
Dec 13 13:06:56.939748 unknown[659]: fetched base config from "system"
Dec 13 13:06:56.939763 unknown[659]: fetched user config from "qemu"
Dec 13 13:06:56.940329 ignition[659]: fetch-offline: fetch-offline passed
Dec 13 13:06:56.942116 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:06:56.940423 ignition[659]: Ignition finished successfully
Dec 13 13:06:56.943466 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:06:56.953257 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:06:56.963365 ignition[771]: Ignition 2.20.0
Dec 13 13:06:56.963377 ignition[771]: Stage: kargs
Dec 13 13:06:56.963539 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:06:56.963548 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:06:56.964461 ignition[771]: kargs: kargs passed
Dec 13 13:06:56.964510 ignition[771]: Ignition finished successfully
Dec 13 13:06:56.967712 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:06:56.978253 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:06:56.988357 ignition[780]: Ignition 2.20.0
Dec 13 13:06:56.988369 ignition[780]: Stage: disks
Dec 13 13:06:56.988547 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:06:56.988572 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:06:56.989523 ignition[780]: disks: disks passed
Dec 13 13:06:56.991078 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:06:56.989574 ignition[780]: Ignition finished successfully
Dec 13 13:06:56.992341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:06:56.993587 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:06:56.995228 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:06:56.996542 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:06:56.998063 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:06:57.015269 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:06:57.026799 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:06:57.031242 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:06:57.040254 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:06:57.095180 kernel: EXT4-fs (vda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 13:06:57.092983 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:06:57.094192 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:06:57.108205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:06:57.110916 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:06:57.112726 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:06:57.112772 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:06:57.112795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:06:57.118396 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Dec 13 13:06:57.118192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:06:57.119995 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:06:57.124775 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:06:57.124801 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:06:57.124811 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:06:57.126141 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:06:57.127291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:06:57.168442 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:06:57.176841 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:06:57.181908 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:06:57.186633 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:06:57.272132 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:06:57.287198 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:06:57.289520 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:06:57.294114 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:06:57.308590 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:06:57.312836 ignition[912]: INFO : Ignition 2.20.0
Dec 13 13:06:57.312836 ignition[912]: INFO : Stage: mount
Dec 13 13:06:57.314143 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:06:57.314143 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:06:57.314143 ignition[912]: INFO : mount: mount passed
Dec 13 13:06:57.314143 ignition[912]: INFO : Ignition finished successfully
Dec 13 13:06:57.316216 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:06:57.323207 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:06:57.719612 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:06:57.730252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:06:57.737285 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Dec 13 13:06:57.737313 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:06:57.737324 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:06:57.738366 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:06:57.741110 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:06:57.741643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:06:57.756809 ignition[943]: INFO : Ignition 2.20.0
Dec 13 13:06:57.756809 ignition[943]: INFO : Stage: files
Dec 13 13:06:57.757976 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:06:57.757976 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:06:57.757976 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:06:57.760677 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:06:57.760677 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:06:57.760677 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:06:57.760677 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:06:57.764628 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:06:57.764628 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:06:57.764628 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 13:06:57.760879 unknown[943]: wrote ssh authorized keys file for user: core
Dec 13 13:06:57.822895 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:06:57.952376 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:06:57.954143 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:06:57.955259 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:06:57.955259 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:06:57.955259 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:06:57.955259 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:06:57.960007 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 13:06:58.327965 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 13:06:58.768320 systemd-networkd[762]: eth0: Gained IPv6LL
Dec 13 13:06:58.920807 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 13:06:58.922756 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:06:58.943875 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:06:58.947043 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:06:58.948133 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:06:58.948133 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:06:58.948133 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:06:58.948133 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:06:58.948133 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:06:58.948133 ignition[943]: INFO : files: files passed
Dec 13 13:06:58.948133 ignition[943]: INFO : Ignition finished successfully
Dec 13 13:06:58.948956 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:06:58.958277 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:06:58.959916 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:06:58.964500 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:06:58.964599 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:06:58.967056 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:06:58.969256 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:06:58.969256 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:06:58.971585 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:06:58.972707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:06:58.974330 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:06:58.984298 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:06:59.001478 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:06:59.001574 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:06:59.003420 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:06:59.004812 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:06:59.006186 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:06:59.012209 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:06:59.024020 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:06:59.026057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:06:59.036108 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:06:59.036966 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:06:59.038442 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:06:59.039681 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:06:59.039786 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:06:59.041597 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:06:59.043016 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:06:59.044182 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:06:59.045514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:06:59.046977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:06:59.048365 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:06:59.049647 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:06:59.050993 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:06:59.052387 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:06:59.053603 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:06:59.054682 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:06:59.054788 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:06:59.056467 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:06:59.057932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:06:59.059342 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:06:59.059428 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:06:59.060822 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:06:59.060924 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:06:59.062897 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:06:59.063007 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:06:59.064359 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:06:59.065437 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:06:59.070164 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:06:59.071085 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:06:59.072613 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:06:59.073764 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:06:59.073853 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:06:59.074910 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:06:59.074988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:06:59.076209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:06:59.076310 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:06:59.077587 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:06:59.077688 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:06:59.089247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:06:59.089898 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:06:59.090012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:06:59.092243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:06:59.093449 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:06:59.093564 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:06:59.094855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:06:59.094944 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:06:59.099160 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:06:59.101274 ignition[999]: INFO : Ignition 2.20.0 Dec 13 13:06:59.101274 ignition[999]: INFO : Stage: umount Dec 13 13:06:59.101274 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:06:59.101274 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:06:59.101274 ignition[999]: INFO : umount: umount passed Dec 13 13:06:59.101274 ignition[999]: INFO : Ignition finished successfully Dec 13 13:06:59.099237 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:06:59.102342 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:06:59.102436 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:06:59.104382 systemd[1]: Stopped target network.target - Network. 
Dec 13 13:06:59.105437 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:06:59.105497 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:06:59.106928 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:06:59.106971 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:06:59.108357 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:06:59.108399 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:06:59.110215 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:06:59.110260 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:06:59.111618 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:06:59.112860 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:06:59.115049 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:06:59.115578 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:06:59.115687 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:06:59.116165 systemd-networkd[762]: eth0: DHCPv6 lease lost Dec 13 13:06:59.117361 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:06:59.117446 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:06:59.118587 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:06:59.119446 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:06:59.121427 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:06:59.121523 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:06:59.124023 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:06:59.124062 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:06:59.136241 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:06:59.136904 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:06:59.136961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:06:59.138396 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:06:59.138434 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:06:59.139787 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:06:59.139829 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:06:59.141285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:06:59.141323 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:06:59.142815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:06:59.150008 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:06:59.150211 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:06:59.163780 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:06:59.163918 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:06:59.165640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:06:59.165691 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Dec 13 13:06:59.166461 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:06:59.166487 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:06:59.167211 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:06:59.167252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:06:59.169280 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:06:59.169321 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:06:59.171209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:06:59.171248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:06:59.186226 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:06:59.186950 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:06:59.186998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:06:59.188568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:06:59.188606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:06:59.190621 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:06:59.190721 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:06:59.192328 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:06:59.195233 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:06:59.202490 systemd[1]: Switching root. Dec 13 13:06:59.221700 systemd-journald[239]: Journal stopped Dec 13 13:06:59.874734 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Dec 13 13:06:59.874788 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:06:59.874803 kernel: SELinux: policy capability open_perms=1 Dec 13 13:06:59.874819 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:06:59.874828 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:06:59.874837 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:06:59.874846 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:06:59.874855 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:06:59.874864 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:06:59.874873 kernel: audit: type=1403 audit(1734095219.356:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:06:59.874884 systemd[1]: Successfully loaded SELinux policy in 28.992ms. Dec 13 13:06:59.874896 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.016ms. Dec 13 13:06:59.874908 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:06:59.874919 systemd[1]: Detected virtualization kvm. Dec 13 13:06:59.874930 systemd[1]: Detected architecture arm64. Dec 13 13:06:59.874940 systemd[1]: Detected first boot. Dec 13 13:06:59.874950 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:06:59.874960 zram_generator::config[1043]: No configuration found. Dec 13 13:06:59.874971 systemd[1]: Populated /etc with preset unit settings. 
Dec 13 13:06:59.874981 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:06:59.874992 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:06:59.875002 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:06:59.875012 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:06:59.875022 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:06:59.875032 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:06:59.875042 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:06:59.875052 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:06:59.875062 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:06:59.875071 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:06:59.875082 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:06:59.875103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:06:59.875114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:06:59.875124 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:06:59.875134 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:06:59.875145 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:06:59.875157 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:06:59.875167 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 13:06:59.875177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:06:59.875189 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:06:59.875199 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:06:59.875209 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:06:59.875219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:06:59.875229 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:06:59.875239 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:06:59.875248 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:06:59.875259 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:06:59.875269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:06:59.875279 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:06:59.875289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:06:59.875299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:06:59.875309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:06:59.875319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:06:59.875329 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 13 13:06:59.875343 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:06:59.875353 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:06:59.875365 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:06:59.875375 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:06:59.875386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:06:59.875396 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:06:59.875407 systemd[1]: Reached target machines.target - Containers. Dec 13 13:06:59.875417 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:06:59.875427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:06:59.875437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:06:59.875449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:06:59.875459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:06:59.875469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:06:59.875479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:06:59.875488 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:06:59.875498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:06:59.875508 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:06:59.875518 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:06:59.875529 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:06:59.875539 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:06:59.875549 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:06:59.875558 kernel: fuse: init (API version 7.39) Dec 13 13:06:59.875567 kernel: loop: module loaded Dec 13 13:06:59.875576 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:06:59.875587 kernel: ACPI: bus type drm_connector registered Dec 13 13:06:59.875597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:06:59.875607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:06:59.875617 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:06:59.875628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:06:59.875652 systemd-journald[1110]: Collecting audit messages is disabled. Dec 13 13:06:59.875680 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:06:59.875691 systemd[1]: Stopped verity-setup.service. Dec 13 13:06:59.875701 systemd-journald[1110]: Journal started Dec 13 13:06:59.875726 systemd-journald[1110]: Runtime Journal (/run/log/journal/c12986a13e064134afe6b4191d071f4a) is 5.9M, max 47.3M, 41.4M free. Dec 13 13:06:59.706104 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 13:06:59.723141 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:06:59.723482 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:06:59.877112 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:06:59.878454 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:06:59.879454 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:06:59.880520 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:06:59.881429 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:06:59.882432 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:06:59.883456 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:06:59.885143 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:06:59.887150 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:06:59.888294 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:06:59.888436 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:06:59.889513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:06:59.889682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:06:59.890779 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:06:59.890917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:06:59.892142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:06:59.892291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:06:59.893373 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:06:59.893512 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:06:59.894630 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:06:59.894774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:06:59.895879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:06:59.897030 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:06:59.898347 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:06:59.910037 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:06:59.924247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:06:59.926053 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:06:59.926908 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:06:59.926944 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:06:59.928585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:06:59.930517 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:06:59.932314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:06:59.933149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 13:06:59.934796 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:06:59.937414 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:06:59.938330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:06:59.941295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:06:59.942249 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:06:59.946164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:06:59.951591 systemd-journald[1110]: Time spent on flushing to /var/log/journal/c12986a13e064134afe6b4191d071f4a is 23.818ms for 856 entries. Dec 13 13:06:59.951591 systemd-journald[1110]: System Journal (/var/log/journal/c12986a13e064134afe6b4191d071f4a) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:06:59.988716 systemd-journald[1110]: Received client request to flush runtime journal. Dec 13 13:06:59.988766 kernel: loop0: detected capacity change from 0 to 113552 Dec 13 13:06:59.988783 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:06:59.951344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:06:59.954945 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:06:59.959649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:06:59.966464 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:06:59.967845 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:06:59.969424 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:06:59.971118 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:06:59.976249 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:06:59.987525 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:06:59.990278 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:06:59.991420 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:06:59.994319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:07:00.000339 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 13:07:00.006006 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:07:00.006607 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:07:00.013582 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:07:00.025321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:07:00.028693 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 13:07:00.043439 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Dec 13 13:07:00.043456 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. 
Dec 13 13:07:00.047539 kernel: loop2: detected capacity change from 0 to 116784 Dec 13 13:07:00.047594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:07:00.090113 kernel: loop3: detected capacity change from 0 to 113552 Dec 13 13:07:00.094108 kernel: loop4: detected capacity change from 0 to 194512 Dec 13 13:07:00.100113 kernel: loop5: detected capacity change from 0 to 116784 Dec 13 13:07:00.103647 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:07:00.104178 (sd-merge)[1181]: Merged extensions into '/usr'. Dec 13 13:07:00.108225 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:07:00.108241 systemd[1]: Reloading... Dec 13 13:07:00.164126 zram_generator::config[1210]: No configuration found. Dec 13 13:07:00.214751 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:07:00.243519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:07:00.279037 systemd[1]: Reloading finished in 170 ms. Dec 13 13:07:00.310153 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:07:00.311368 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:07:00.324268 systemd[1]: Starting ensure-sysext.service... Dec 13 13:07:00.326245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:07:00.337145 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:07:00.337163 systemd[1]: Reloading... Dec 13 13:07:00.347915 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:07:00.348138 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:07:00.348787 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:07:00.348992 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 13:07:00.349041 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 13:07:00.351546 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:07:00.351559 systemd-tmpfiles[1242]: Skipping /boot Dec 13 13:07:00.359598 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:07:00.359617 systemd-tmpfiles[1242]: Skipping /boot Dec 13 13:07:00.383125 zram_generator::config[1272]: No configuration found. Dec 13 13:07:00.458498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:07:00.494105 systemd[1]: Reloading finished in 156 ms. Dec 13 13:07:00.509989 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:07:00.525158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:07:00.532056 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 13 13:07:00.534295 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:07:00.536281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:07:00.540407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:07:00.548357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:07:00.550453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:07:00.555908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:07:00.556990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:07:00.559323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:07:00.563038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:07:00.564190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:07:00.570392 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:07:00.572195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:07:00.572319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:07:00.575481 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:07:00.577032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:07:00.577168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:07:00.580237 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:07:00.580390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:07:00.581699 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Dec 13 13:07:00.588359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:07:00.589977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:07:00.600338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:07:00.602696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:07:00.603953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:07:00.605688 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:07:00.607284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:07:00.609583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:07:00.611499 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:07:00.613717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:07:00.613852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:07:00.616724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:07:00.616854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:07:00.619309 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 13:07:00.619436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:07:00.636830 augenrules[1367]: No rules Dec 13 13:07:00.639339 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:07:00.640722 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:07:00.642378 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:07:00.653399 systemd[1]: Finished ensure-sysext.service. Dec 13 13:07:00.654313 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1356) Dec 13 13:07:00.654346 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:07:00.658029 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 13:07:00.660834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:07:00.664872 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1356) Dec 13 13:07:00.674333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:07:00.676527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:07:00.680278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:07:00.683480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:07:00.685813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:07:00.688106 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1338) Dec 13 13:07:00.687593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:07:00.690676 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:07:00.691549 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:07:00.691975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:07:00.692161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:07:00.693467 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:07:00.694776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:07:00.701240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:07:00.702276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:07:00.703874 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:07:00.704000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:07:00.711428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:07:00.711508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:07:00.724048 systemd-resolved[1308]: Positive Trust Anchors:
Dec 13 13:07:00.724066 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:07:00.724125 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:07:00.730721 systemd-resolved[1308]: Defaulting to hostname 'linux'. Dec 13 13:07:00.732197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:07:00.733050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:07:00.743565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:07:00.757256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:07:00.769646 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:07:00.770605 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:07:00.776600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:07:00.784239 systemd-networkd[1389]: lo: Link UP Dec 13 13:07:00.784245 systemd-networkd[1389]: lo: Gained carrier Dec 13 13:07:00.784982 systemd-networkd[1389]: Enumeration completed Dec 13 13:07:00.785157 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:07:00.785522 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:07:00.785531 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:07:00.786161 systemd-networkd[1389]: eth0: Link UP Dec 13 13:07:00.786170 systemd-networkd[1389]: eth0: Gained carrier Dec 13 13:07:00.786182 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:07:00.786475 systemd[1]: Reached target network.target - Network. Dec 13 13:07:00.799305 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:07:00.803250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:07:00.806244 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:07:00.806888 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Dec 13 13:07:01.225418 systemd-resolved[1308]: Clock change detected. Flushing caches. Dec 13 13:07:01.225465 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 13:07:01.225523 systemd-timesyncd[1390]: Initial clock synchronization to Fri 2024-12-13 13:07:01.225361 UTC. Dec 13 13:07:01.226065 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:07:01.228302 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:07:01.250521 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:07:01.273956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:07:01.284271 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:07:01.285353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:07:01.286182 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:07:01.287002 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:07:01.287984 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:07:01.289037 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:07:01.290138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:07:01.291026 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:07:01.291860 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:07:01.291895 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:07:01.292558 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:07:01.294030 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:07:01.296037 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:07:01.302785 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:07:01.304694 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:07:01.305948 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:07:01.306785 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:07:01.307544 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:07:01.308228 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:07:01.308259 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:07:01.309113 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:07:01.311047 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:07:01.313080 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:07:01.313792 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:07:01.320722 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:07:01.321485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:07:01.325991 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:07:01.328042 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:07:01.331107 jq[1418]: false Dec 13 13:07:01.332159 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:07:01.336352 extend-filesystems[1419]: Found loop3 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found loop4 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found loop5 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda1 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda2 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda3 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found usr Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda4 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda6 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda7 Dec 13 13:07:01.339231 extend-filesystems[1419]: Found vda9 Dec 13 13:07:01.339231 extend-filesystems[1419]: Checking size of /dev/vda9 Dec 13 13:07:01.336407 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:07:01.341655 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:07:01.345202 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:07:01.345599 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:07:01.347057 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:07:01.354464 dbus-daemon[1417]: [system] SELinux support is enabled Dec 13 13:07:01.354823 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:07:01.356579 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:07:01.359673 jq[1436]: true Dec 13 13:07:01.363059 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:07:01.364724 extend-filesystems[1419]: Resized partition /dev/vda9 Dec 13 13:07:01.371755 extend-filesystems[1441]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:07:01.374368 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:07:01.374533 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:07:01.374797 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:07:01.374945 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:07:01.376592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:07:01.376860 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:07:01.380040 update_engine[1435]: I20241213 13:07:01.379878 1435 main.cc:92] Flatcar Update Engine starting Dec 13 13:07:01.383476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1354) Dec 13 13:07:01.383525 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:07:01.388071 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:07:01.394371 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:07:01.394425 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 13:07:01.395797 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:07:01.395823 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:07:01.396076 jq[1443]: true Dec 13 13:07:01.397532 update_engine[1435]: I20241213 13:07:01.397465 1435 update_check_scheduler.cc:74] Next update check in 7m37s Dec 13 13:07:01.401211 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:07:01.406785 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:07:01.410859 tar[1442]: linux-arm64/helm Dec 13 13:07:01.414778 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:07:01.415320 systemd-logind[1431]: New seat seat0. Dec 13 13:07:01.422972 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:07:01.435726 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:07:01.452342 extend-filesystems[1441]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:07:01.452342 extend-filesystems[1441]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:07:01.452342 extend-filesystems[1441]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:07:01.455230 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Dec 13 13:07:01.454754 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:07:01.454947 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:07:01.462514 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:07:01.465964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:07:01.467545 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:07:01.484577 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:07:01.597403 containerd[1444]: time="2024-12-13T13:07:01.597301105Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:07:01.622975 containerd[1444]: time="2024-12-13T13:07:01.622903625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624249 containerd[1444]: time="2024-12-13T13:07:01.624211865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624249 containerd[1444]: time="2024-12-13T13:07:01.624242905Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:07:01.624332 containerd[1444]: time="2024-12-13T13:07:01.624258145Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:07:01.624407 containerd[1444]: time="2024-12-13T13:07:01.624389145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:07:01.624433 containerd[1444]: time="2024-12-13T13:07:01.624411265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:01.624504 containerd[1444]: time="2024-12-13T13:07:01.624463745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624504 containerd[1444]: time="2024-12-13T13:07:01.624479785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624644 containerd[1444]: time="2024-12-13T13:07:01.624625145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624644 containerd[1444]: time="2024-12-13T13:07:01.624644265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624692 containerd[1444]: time="2024-12-13T13:07:01.624657025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624692 containerd[1444]: time="2024-12-13T13:07:01.624665585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624756 containerd[1444]: time="2024-12-13T13:07:01.624740745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:07:01.624977 containerd[1444]: time="2024-12-13T13:07:01.624958465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:07:01.625074 containerd[1444]: time="2024-12-13T13:07:01.625058185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:07:01.625098 containerd[1444]: time="2024-12-13T13:07:01.625075105Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:07:01.625166 containerd[1444]: time="2024-12-13T13:07:01.625148105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:07:01.625207 containerd[1444]: time="2024-12-13T13:07:01.625193705Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:07:01.628820 containerd[1444]: time="2024-12-13T13:07:01.628792545Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:07:01.628878 containerd[1444]: time="2024-12-13T13:07:01.628838745Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:07:01.628878 containerd[1444]: time="2024-12-13T13:07:01.628854505Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:07:01.628878 containerd[1444]: time="2024-12-13T13:07:01.628869185Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:07:01.628969 containerd[1444]: time="2024-12-13T13:07:01.628882065Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:07:01.629061 containerd[1444]: time="2024-12-13T13:07:01.629025985Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630298465Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630457705Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630484105Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630517545Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630538745Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630555585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630573185Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630590785Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630607945Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630623865Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630660745Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630675985Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630703065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.632844 containerd[1444]: time="2024-12-13T13:07:01.630722785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630738385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630753945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630769665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630795105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630811785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630826905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630843985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630862905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630874545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630889745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630904785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630944825Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630976065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.630994145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633122 containerd[1444]: time="2024-12-13T13:07:01.631008705Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631200585Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631224905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631276585Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631296945Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631311545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631327665Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631338545Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:07:01.633356 containerd[1444]: time="2024-12-13T13:07:01.631351865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:07:01.633482 containerd[1444]: time="2024-12-13T13:07:01.631728865Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:07:01.633482 containerd[1444]: time="2024-12-13T13:07:01.631778745Z" level=info msg="Connect containerd service" Dec 13 13:07:01.633482 containerd[1444]: time="2024-12-13T13:07:01.631818345Z" level=info msg="using legacy CRI server" Dec 13 13:07:01.633482 containerd[1444]: time="2024-12-13T13:07:01.631825665Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:07:01.633482 containerd[1444]: time="2024-12-13T13:07:01.632105385Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:07:01.633482 containerd[1444]: time="2024-12-13T13:07:01.633141745Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:07:01.633825
containerd[1444]: time="2024-12-13T13:07:01.633789905Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:07:01.633863 containerd[1444]: time="2024-12-13T13:07:01.633845265Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:07:01.634623 containerd[1444]: time="2024-12-13T13:07:01.634586385Z" level=info msg="Start subscribing containerd event" Dec 13 13:07:01.634670 containerd[1444]: time="2024-12-13T13:07:01.634635585Z" level=info msg="Start recovering state" Dec 13 13:07:01.634714 containerd[1444]: time="2024-12-13T13:07:01.634695585Z" level=info msg="Start event monitor" Dec 13 13:07:01.634714 containerd[1444]: time="2024-12-13T13:07:01.634710665Z" level=info msg="Start snapshots syncer" Dec 13 13:07:01.634755 containerd[1444]: time="2024-12-13T13:07:01.634719665Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:07:01.634755 containerd[1444]: time="2024-12-13T13:07:01.634726425Z" level=info msg="Start streaming server" Dec 13 13:07:01.635825 containerd[1444]: time="2024-12-13T13:07:01.634855745Z" level=info msg="containerd successfully booted in 0.039842s" Dec 13 13:07:01.634949 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:07:01.774323 tar[1442]: linux-arm64/LICENSE Dec 13 13:07:01.774323 tar[1442]: linux-arm64/README.md Dec 13 13:07:01.786274 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:07:02.322100 systemd-networkd[1389]: eth0: Gained IPv6LL Dec 13 13:07:02.327476 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:07:02.329627 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:07:02.343205 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:07:02.345772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:02.347825 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:07:02.365359 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:07:02.374708 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:07:02.374953 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:07:02.376352 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:07:02.814008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:02.817910 (kubelet)[1514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:07:02.838430 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:07:02.856241 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:07:02.865142 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:07:02.869793 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:07:02.869973 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:07:02.872695 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:07:02.885140 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:07:02.897232 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:07:02.899025 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Dec 13 13:07:02.900005 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:07:02.900775 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:07:02.901698 systemd[1]: Startup finished in 536ms (kernel) + 4.659s (initrd) + 3.159s (userspace) = 8.354s. Dec 13 13:07:02.911418 agetty[1529]: failed to open credentials directory Dec 13 13:07:02.911730 agetty[1530]: failed to open credentials directory Dec 13 13:07:03.269118 kubelet[1514]: E1213 13:07:03.268980 1514 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:07:03.271851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:07:03.272012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:07:07.689508 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:07:07.690631 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:49800.service - OpenSSH per-connection server daemon (10.0.0.1:49800). Dec 13 13:07:07.748766 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 49800 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:07.752357 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:07.759573 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:07:07.772210 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:07:07.774399 systemd-logind[1431]: New session 1 of user core. Dec 13 13:07:07.781470 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:07:07.783591 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:07:07.789602 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:07:07.864195 systemd[1549]: Queued start job for default target default.target. Dec 13 13:07:07.876801 systemd[1549]: Created slice app.slice - User Application Slice. Dec 13 13:07:07.876842 systemd[1549]: Reached target paths.target - Paths. Dec 13 13:07:07.876854 systemd[1549]: Reached target timers.target - Timers. Dec 13 13:07:07.878075 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:07:07.887741 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:07:07.887792 systemd[1549]: Reached target sockets.target - Sockets. Dec 13 13:07:07.887803 systemd[1549]: Reached target basic.target - Basic System. Dec 13 13:07:07.887848 systemd[1549]: Reached target default.target - Main User Target. Dec 13 13:07:07.887873 systemd[1549]: Startup finished in 91ms. Dec 13 13:07:07.887978 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:07:07.889193 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:07:07.950487 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:49816.service - OpenSSH per-connection server daemon (10.0.0.1:49816). 
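[Annotation] The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet (it is typically written by `kubeadm init`/`join`), so the unit exits with status=1 and systemd schedules restarts. A trivial sketch of that precondition check, using only the path quoted in the error:

    // precheck.go — sketch: reproduce the kubelet's failing precondition.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	const path = "/var/lib/kubelet/config.yaml"
    	if _, err := os.Stat(path); os.IsNotExist(err) {
    		// Matches the run.go:74 error: kubelet keeps crash-looping
    		// until something (typically kubeadm) writes this file.
    		fmt.Printf("%s missing: kubelet will exit with status=1\n", path)
    		return
    	}
    	fmt.Printf("%s present: kubelet can load its configuration\n", path)
    }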
Dec 13 13:07:07.989805 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 49816 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:07.990888 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:07.994823 systemd-logind[1431]: New session 2 of user core. Dec 13 13:07:08.007057 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:07:08.057348 sshd[1562]: Connection closed by 10.0.0.1 port 49816 Dec 13 13:07:08.057792 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:08.071171 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:49816.service: Deactivated successfully. Dec 13 13:07:08.072620 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:07:08.074078 systemd-logind[1431]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:07:08.075257 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:49830.service - OpenSSH per-connection server daemon (10.0.0.1:49830). Dec 13 13:07:08.076141 systemd-logind[1431]: Removed session 2. Dec 13 13:07:08.114949 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 49830 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:08.116008 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:08.119456 systemd-logind[1431]: New session 3 of user core. Dec 13 13:07:08.130039 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:07:08.177006 sshd[1569]: Connection closed by 10.0.0.1 port 49830 Dec 13 13:07:08.177441 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:08.186272 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:49830.service: Deactivated successfully. Dec 13 13:07:08.187604 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:07:08.190042 systemd-logind[1431]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:07:08.191092 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:49842.service - OpenSSH per-connection server daemon (10.0.0.1:49842). Dec 13 13:07:08.191822 systemd-logind[1431]: Removed session 3. Dec 13 13:07:08.230592 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 49842 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:08.231647 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:08.235425 systemd-logind[1431]: New session 4 of user core. Dec 13 13:07:08.246120 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:07:08.297736 sshd[1576]: Connection closed by 10.0.0.1 port 49842 Dec 13 13:07:08.298044 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:08.307521 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:49842.service: Deactivated successfully. Dec 13 13:07:08.309111 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:07:08.310358 systemd-logind[1431]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:07:08.324237 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:49848.service - OpenSSH per-connection server daemon (10.0.0.1:49848). Dec 13 13:07:08.325272 systemd-logind[1431]: Removed session 4. 
Dec 13 13:07:08.360607 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 49848 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:08.361651 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:08.364880 systemd-logind[1431]: New session 5 of user core. Dec 13 13:07:08.372125 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:07:08.428673 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:07:08.428970 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:07:08.441812 sudo[1584]: pam_unix(sudo:session): session closed for user root Dec 13 13:07:08.444005 sshd[1583]: Connection closed by 10.0.0.1 port 49848 Dec 13 13:07:08.443876 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:08.456342 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:49848.service: Deactivated successfully. Dec 13 13:07:08.459281 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:07:08.460567 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:07:08.471220 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:49860.service - OpenSSH per-connection server daemon (10.0.0.1:49860). Dec 13 13:07:08.472383 systemd-logind[1431]: Removed session 5. Dec 13 13:07:08.511031 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 49860 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:08.512044 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:08.515532 systemd-logind[1431]: New session 6 of user core. Dec 13 13:07:08.525063 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:07:08.575646 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:07:08.575917 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:07:08.578987 sudo[1593]: pam_unix(sudo:session): session closed for user root Dec 13 13:07:08.583411 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:07:08.583672 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:07:08.601348 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:07:08.622630 augenrules[1615]: No rules Dec 13 13:07:08.623165 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:07:08.623332 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:07:08.624254 sudo[1592]: pam_unix(sudo:session): session closed for user root Dec 13 13:07:08.625354 sshd[1591]: Connection closed by 10.0.0.1 port 49860 Dec 13 13:07:08.625674 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:08.638049 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:49860.service: Deactivated successfully. Dec 13 13:07:08.639509 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:07:08.641082 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:07:08.641999 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:49864.service - OpenSSH per-connection server daemon (10.0.0.1:49864). Dec 13 13:07:08.642942 systemd-logind[1431]: Removed session 6. 
Dec 13 13:07:08.682365 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 49864 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:08.683414 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:08.686938 systemd-logind[1431]: New session 7 of user core. Dec 13 13:07:08.701071 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:07:08.750599 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:07:08.751161 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:07:09.077231 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:07:09.077329 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:07:09.316974 dockerd[1646]: time="2024-12-13T13:07:09.316702185Z" level=info msg="Starting up" Dec 13 13:07:09.459598 dockerd[1646]: time="2024-12-13T13:07:09.459258545Z" level=info msg="Loading containers: start." Dec 13 13:07:09.584951 kernel: Initializing XFRM netlink socket Dec 13 13:07:09.646506 systemd-networkd[1389]: docker0: Link UP Dec 13 13:07:09.681101 dockerd[1646]: time="2024-12-13T13:07:09.681056545Z" level=info msg="Loading containers: done." Dec 13 13:07:09.693733 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4192038308-merged.mount: Deactivated successfully. Dec 13 13:07:09.694801 dockerd[1646]: time="2024-12-13T13:07:09.694749105Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:07:09.694862 dockerd[1646]: time="2024-12-13T13:07:09.694839025Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:07:09.695060 dockerd[1646]: time="2024-12-13T13:07:09.695028985Z" level=info msg="Daemon has completed initialization" Dec 13 13:07:09.721934 dockerd[1646]: time="2024-12-13T13:07:09.721819185Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:07:09.722409 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:07:10.468483 containerd[1444]: time="2024-12-13T13:07:10.468429065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:07:11.186313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427847154.mount: Deactivated successfully. 
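[Annotation] dockerd above completed initialization and listens on /run/docker.sock. A hedged sketch of confirming that from Go with the Docker SDK (github.com/docker/docker/client); nothing here comes from the log except the expectation that the default unix socket is live.

    // docker_ping.go — sketch: ping the daemon that logged
    // "API listen on /run/docker.sock".
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// FromEnv falls back to the default unix socket when DOCKER_HOST
    	// is unset, which is the socket the daemon advertised above.
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatalf("create client: %v", err)
    	}
    	defer cli.Close()

    	ping, err := cli.Ping(context.Background())
    	if err != nil {
    		log.Fatalf("ping daemon: %v", err)
    	}
    	fmt.Printf("docker daemon up, API version %s\n", ping.APIVersion)
    }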
Dec 13 13:07:12.302381 containerd[1444]: time="2024-12-13T13:07:12.302318465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:12.302874 containerd[1444]: time="2024-12-13T13:07:12.302839625Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Dec 13 13:07:12.303864 containerd[1444]: time="2024-12-13T13:07:12.303830425Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:12.306547 containerd[1444]: time="2024-12-13T13:07:12.306519025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:12.307757 containerd[1444]: time="2024-12-13T13:07:12.307710985Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.83922892s" Dec 13 13:07:12.307757 containerd[1444]: time="2024-12-13T13:07:12.307749105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 13:07:12.326074 containerd[1444]: time="2024-12-13T13:07:12.326034745Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:07:13.522287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:07:13.528106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:13.622149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:13.625215 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:07:13.672736 kubelet[1929]: E1213 13:07:13.672679 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:07:13.678141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:07:13.678282 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
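[Annotation] The PullImage lines above are the CRI plugin fetching and unpacking control-plane images into the overlayfs snapshotter. A minimal sketch of the same operation done directly against containerd; the image ref is taken from the log, everything else (client library, namespace) is an assumption as in the earlier sketch.

    // pull_sketch.go — sketch: pull an image the way kubelet-visible
    // tooling would see it, into the CRI plugin's namespace.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatalf("dial containerd: %v", err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.29.12",
    		containerd.WithPullUnpack) // unpack into the default snapshotter
    	if err != nil {
    		log.Fatalf("pull: %v", err)
    	}
    	fmt.Printf("pulled %s\n", img.Name())
    }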
Dec 13 13:07:14.073029 containerd[1444]: time="2024-12-13T13:07:14.072983705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:14.073445 containerd[1444]: time="2024-12-13T13:07:14.073400945Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Dec 13 13:07:14.074283 containerd[1444]: time="2024-12-13T13:07:14.074238145Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:14.076818 containerd[1444]: time="2024-12-13T13:07:14.076791665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:14.078004 containerd[1444]: time="2024-12-13T13:07:14.077954305Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.75188056s" Dec 13 13:07:14.078004 containerd[1444]: time="2024-12-13T13:07:14.077983465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 13:07:14.096599 containerd[1444]: time="2024-12-13T13:07:14.096387785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:07:15.016838 containerd[1444]: time="2024-12-13T13:07:15.016792025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:15.017622 containerd[1444]: time="2024-12-13T13:07:15.017548385Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Dec 13 13:07:15.018273 containerd[1444]: time="2024-12-13T13:07:15.018241865Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:15.020980 containerd[1444]: time="2024-12-13T13:07:15.020932225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:15.022138 containerd[1444]: time="2024-12-13T13:07:15.022093465Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 925.66648ms" Dec 13 13:07:15.022138 containerd[1444]: time="2024-12-13T13:07:15.022126665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 13:07:15.039570 
containerd[1444]: time="2024-12-13T13:07:15.039511625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:07:16.014753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112424385.mount: Deactivated successfully. Dec 13 13:07:16.380442 containerd[1444]: time="2024-12-13T13:07:16.380318745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:16.394339 containerd[1444]: time="2024-12-13T13:07:16.394270665Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Dec 13 13:07:16.407183 containerd[1444]: time="2024-12-13T13:07:16.407139905Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:16.410431 containerd[1444]: time="2024-12-13T13:07:16.410358785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:16.411464 containerd[1444]: time="2024-12-13T13:07:16.411419465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.37186768s" Dec 13 13:07:16.411559 containerd[1444]: time="2024-12-13T13:07:16.411468025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 13:07:16.430031 containerd[1444]: time="2024-12-13T13:07:16.429996425Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:07:17.103584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300157541.mount: Deactivated successfully. 
Dec 13 13:07:18.069397 containerd[1444]: time="2024-12-13T13:07:18.069346265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:18.070350 containerd[1444]: time="2024-12-13T13:07:18.070093265Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 13:07:18.071016 containerd[1444]: time="2024-12-13T13:07:18.070989465Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:18.073797 containerd[1444]: time="2024-12-13T13:07:18.073763105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:18.075952 containerd[1444]: time="2024-12-13T13:07:18.075911505Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.64587656s" Dec 13 13:07:18.076001 containerd[1444]: time="2024-12-13T13:07:18.075958425Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:07:18.094040 containerd[1444]: time="2024-12-13T13:07:18.094004545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:07:18.527625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934894210.mount: Deactivated successfully. 
Dec 13 13:07:18.532335 containerd[1444]: time="2024-12-13T13:07:18.532293585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:18.533454 containerd[1444]: time="2024-12-13T13:07:18.533394545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 13:07:18.534147 containerd[1444]: time="2024-12-13T13:07:18.534110025Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:18.536545 containerd[1444]: time="2024-12-13T13:07:18.536508305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:18.537478 containerd[1444]: time="2024-12-13T13:07:18.537341625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 443.29564ms" Dec 13 13:07:18.537478 containerd[1444]: time="2024-12-13T13:07:18.537376505Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:07:18.556292 containerd[1444]: time="2024-12-13T13:07:18.556235225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:07:19.117413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068515292.mount: Deactivated successfully. Dec 13 13:07:20.678675 containerd[1444]: time="2024-12-13T13:07:20.678612145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:20.679216 containerd[1444]: time="2024-12-13T13:07:20.679170905Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Dec 13 13:07:20.679944 containerd[1444]: time="2024-12-13T13:07:20.679861425Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:20.683543 containerd[1444]: time="2024-12-13T13:07:20.683501025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:20.684245 containerd[1444]: time="2024-12-13T13:07:20.684209665Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.127934s" Dec 13 13:07:20.684245 containerd[1444]: time="2024-12-13T13:07:20.684240385Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 13:07:23.912626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
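[Annotation] The etcd pull above is the largest fetch of the sequence: 65,200,788 bytes read in 2.127934s, an effective rate of roughly 30.6 MB/s. Both numbers are printed in the log; the arithmetic below is just that back-of-the-envelope check.

    // pull_rate.go — sketch: effective pull rate from the logged figures.
    package main

    import "fmt"

    func main() {
    	const bytesRead = 65200788 // "bytes read=65200788" for etcd:3.5.10-0
    	const seconds = 2.127934   // "in 2.127934s" per the Pulled image line

    	rate := bytesRead / seconds // ≈ 3.06e7 bytes/s
    	fmt.Printf("effective pull rate: %.1f MB/s\n", rate/1e6)
    }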
Dec 13 13:07:23.922258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:24.047101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:24.050759 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:07:24.106103 kubelet[2159]: E1213 13:07:24.106045 2159 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:07:24.108279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:07:24.108403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:07:25.483776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:25.494134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:25.508860 systemd[1]: Reloading requested from client PID 2174 ('systemctl') (unit session-7.scope)... Dec 13 13:07:25.508877 systemd[1]: Reloading... Dec 13 13:07:25.574956 zram_generator::config[2217]: No configuration found. Dec 13 13:07:25.687987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:07:25.740023 systemd[1]: Reloading finished in 230 ms. Dec 13 13:07:25.777410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:25.781109 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:25.785900 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:07:25.786169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:25.788033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:25.892061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:25.895540 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:07:25.932479 kubelet[2260]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:07:25.932479 kubelet[2260]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:07:25.932479 kubelet[2260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:07:25.932817 kubelet[2260]: I1213 13:07:25.932523 2260 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:07:27.026691 kubelet[2260]: I1213 13:07:27.026650 2260 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:07:27.026691 kubelet[2260]: I1213 13:07:27.026684 2260 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:07:27.027053 kubelet[2260]: I1213 13:07:27.026905 2260 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:07:27.059604 kubelet[2260]: E1213 13:07:27.059575 2260 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.061447 kubelet[2260]: I1213 13:07:27.061429 2260 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:07:27.070146 kubelet[2260]: I1213 13:07:27.070123 2260 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:07:27.071669 kubelet[2260]: I1213 13:07:27.071165 2260 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:07:27.071669 kubelet[2260]: I1213 13:07:27.071355 2260 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:07:27.071669 kubelet[2260]: I1213 13:07:27.071376 2260 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:07:27.071669 kubelet[2260]: I1213 13:07:27.071384 2260 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:07:27.071669 kubelet[2260]: I1213 13:07:27.071496 2260 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:07:27.073750 kubelet[2260]: I1213 13:07:27.073726 2260 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:07:27.073862 kubelet[2260]: 
I1213 13:07:27.073850 2260 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:07:27.073959 kubelet[2260]: I1213 13:07:27.073948 2260 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:07:27.074026 kubelet[2260]: I1213 13:07:27.074016 2260 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:07:27.074312 kubelet[2260]: W1213 13:07:27.074235 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.074361 kubelet[2260]: E1213 13:07:27.074322 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.074729 kubelet[2260]: W1213 13:07:27.074693 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.074767 kubelet[2260]: E1213 13:07:27.074732 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.075959 kubelet[2260]: I1213 13:07:27.075824 2260 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:07:27.076549 kubelet[2260]: I1213 13:07:27.076534 2260 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:07:27.077017 kubelet[2260]: W1213 13:07:27.076736 2260 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
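[Annotation] Every reflector list/watch above fails against https://10.0.0.33:6443 with "connection refused". That endpoint is the kube-apiserver this kubelet is itself about to launch as a static pod (see the sandbox creation further below), so refusal is expected at this stage of bootstrap. A sketch of the connectivity test those errors imply, using only the address the failures name:

    // apiserver_probe.go — sketch: bare TCP dial to the endpoint the
    // reflector errors report. Before the kube-apiserver static pod is
    // running, this dial is refused.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "10.0.0.33:6443", 2*time.Second)
    	if err != nil {
    		fmt.Printf("apiserver not up yet: %v\n", err) // "connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open; reflectors should start syncing")
    }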
Dec 13 13:07:27.079860 kubelet[2260]: I1213 13:07:27.079843 2260 server.go:1256] "Started kubelet" Dec 13 13:07:27.080336 kubelet[2260]: I1213 13:07:27.080304 2260 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:07:27.081692 kubelet[2260]: I1213 13:07:27.081669 2260 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:07:27.083548 kubelet[2260]: I1213 13:07:27.082292 2260 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:07:27.083548 kubelet[2260]: I1213 13:07:27.083309 2260 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:07:27.083548 kubelet[2260]: I1213 13:07:27.083515 2260 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:07:27.090471 kubelet[2260]: I1213 13:07:27.090445 2260 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:07:27.090608 kubelet[2260]: E1213 13:07:27.090579 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Dec 13 13:07:27.090654 kubelet[2260]: I1213 13:07:27.090614 2260 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:07:27.090768 kubelet[2260]: I1213 13:07:27.090753 2260 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:07:27.090872 kubelet[2260]: W1213 13:07:27.090830 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.090872 kubelet[2260]: E1213 13:07:27.090871 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.091368 kubelet[2260]: E1213 13:07:27.091339 2260 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810be7088e2b3b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:07:27.079814065 +0000 UTC m=+1.181164161,LastTimestamp:2024-12-13 13:07:27.079814065 +0000 UTC m=+1.181164161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:07:27.092410 kubelet[2260]: E1213 13:07:27.092340 2260 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:07:27.093748 kubelet[2260]: I1213 13:07:27.092799 2260 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:07:27.093748 kubelet[2260]: I1213 13:07:27.092817 2260 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:07:27.093748 kubelet[2260]: I1213 13:07:27.092885 2260 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:07:27.103621 kubelet[2260]: I1213 13:07:27.103377 2260 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:07:27.103621 kubelet[2260]: I1213 13:07:27.103401 2260 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:07:27.103621 kubelet[2260]: I1213 13:07:27.103415 2260 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:07:27.103873 kubelet[2260]: I1213 13:07:27.103838 2260 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:07:27.105004 kubelet[2260]: I1213 13:07:27.104914 2260 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:07:27.105004 kubelet[2260]: I1213 13:07:27.105004 2260 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:07:27.105089 kubelet[2260]: I1213 13:07:27.105028 2260 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:07:27.105089 kubelet[2260]: E1213 13:07:27.105070 2260 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:07:27.293726 kubelet[2260]: E1213 13:07:27.292143 2260 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:07:27.293726 kubelet[2260]: I1213 13:07:27.292170 2260 policy_none.go:49] "None policy: Start" Dec 13 13:07:27.293726 kubelet[2260]: E1213 13:07:27.292760 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Dec 13 13:07:27.293726 kubelet[2260]: I1213 13:07:27.292778 2260 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:27.293726 kubelet[2260]: W1213 13:07:27.293123 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.293726 kubelet[2260]: E1213 13:07:27.293169 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.293726 kubelet[2260]: E1213 13:07:27.293173 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Dec 13 13:07:27.293726 kubelet[2260]: I1213 13:07:27.293249 2260 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 
13:07:27.293726 kubelet[2260]: I1213 13:07:27.293283 2260 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:07:27.299221 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:07:27.316714 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:07:27.319372 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:07:27.329738 kubelet[2260]: I1213 13:07:27.329599 2260 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:07:27.330006 kubelet[2260]: I1213 13:07:27.329852 2260 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:07:27.331091 kubelet[2260]: E1213 13:07:27.331039 2260 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:07:27.492441 kubelet[2260]: I1213 13:07:27.492378 2260 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:07:27.493479 kubelet[2260]: I1213 13:07:27.493454 2260 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:07:27.494491 kubelet[2260]: I1213 13:07:27.494388 2260 topology_manager.go:215] "Topology Admit Handler" podUID="693889598a783314c23e956cc5b0ba0e" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:07:27.495341 kubelet[2260]: I1213 13:07:27.495103 2260 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:27.495933 kubelet[2260]: E1213 13:07:27.495560 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Dec 13 13:07:27.500277 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 13:07:27.523442 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Dec 13 13:07:27.540001 systemd[1]: Created slice kubepods-burstable-pod693889598a783314c23e956cc5b0ba0e.slice - libcontainer container kubepods-burstable-pod693889598a783314c23e956cc5b0ba0e.slice. 
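[Annotation] The Topology Admit Handler entries above come from the static pod path the kubelet registered earlier ("/etc/kubernetes/manifests"); each admitted pod gets a matching kubepods-burstable-pod<UID>.slice cgroup from systemd. The sketch below just lists that directory; the manifest file names in the comment are an assumption (the log shows only the resulting pod names and UIDs).

    // manifests_ls.go — sketch: enumerate the static pod manifests that
    // produced the admitted pods above.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	entries, err := os.ReadDir("/etc/kubernetes/manifests")
    	if err != nil {
    		log.Fatalf("read static pod path: %v", err)
    	}
    	for _, e := range entries {
    		// Typically kube-apiserver.yaml, kube-controller-manager.yaml,
    		// and kube-scheduler.yaml on a kubeadm control-plane node.
    		fmt.Println(e.Name())
    	}
    }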
Dec 13 13:07:27.593986 kubelet[2260]: I1213 13:07:27.593867 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:27.593986 kubelet[2260]: I1213 13:07:27.593916 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:07:27.593986 kubelet[2260]: I1213 13:07:27.593955 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/693889598a783314c23e956cc5b0ba0e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"693889598a783314c23e956cc5b0ba0e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:27.593986 kubelet[2260]: I1213 13:07:27.593975 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/693889598a783314c23e956cc5b0ba0e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"693889598a783314c23e956cc5b0ba0e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:27.593986 kubelet[2260]: I1213 13:07:27.593995 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/693889598a783314c23e956cc5b0ba0e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"693889598a783314c23e956cc5b0ba0e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:27.594163 kubelet[2260]: I1213 13:07:27.594016 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:27.594163 kubelet[2260]: I1213 13:07:27.594034 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:27.594163 kubelet[2260]: I1213 13:07:27.594062 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:27.594163 kubelet[2260]: I1213 13:07:27.594083 2260 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:27.693550 kubelet[2260]: E1213 13:07:27.693516 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Dec 13 13:07:27.824035 kubelet[2260]: E1213 13:07:27.823987 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:27.824742 containerd[1444]: time="2024-12-13T13:07:27.824691145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 13:07:27.838987 kubelet[2260]: E1213 13:07:27.838887 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:27.839649 containerd[1444]: time="2024-12-13T13:07:27.839431025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 13:07:27.842565 kubelet[2260]: E1213 13:07:27.842545 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:27.842860 containerd[1444]: time="2024-12-13T13:07:27.842831265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:693889598a783314c23e956cc5b0ba0e,Namespace:kube-system,Attempt:0,}" Dec 13 13:07:27.892615 kubelet[2260]: W1213 13:07:27.892474 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.892615 kubelet[2260]: E1213 13:07:27.892537 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:27.897500 kubelet[2260]: I1213 13:07:27.897466 2260 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:27.897729 kubelet[2260]: E1213 13:07:27.897706 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Dec 13 13:07:28.131112 kubelet[2260]: W1213 13:07:28.131030 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:28.131112 kubelet[2260]: E1213 13:07:28.131086 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:28.153558 kubelet[2260]: W1213 
13:07:28.153441 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:28.153558 kubelet[2260]: E1213 13:07:28.153487 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:28.370577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445366489.mount: Deactivated successfully. Dec 13 13:07:28.375170 containerd[1444]: time="2024-12-13T13:07:28.375125305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:07:28.376164 containerd[1444]: time="2024-12-13T13:07:28.376127545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 13:07:28.378681 containerd[1444]: time="2024-12-13T13:07:28.377973425Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:07:28.379754 containerd[1444]: time="2024-12-13T13:07:28.379721665Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:07:28.381339 containerd[1444]: time="2024-12-13T13:07:28.381307145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:07:28.382578 containerd[1444]: time="2024-12-13T13:07:28.382553865Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:07:28.383754 containerd[1444]: time="2024-12-13T13:07:28.383715705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:07:28.384353 containerd[1444]: time="2024-12-13T13:07:28.384315825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:07:28.385323 containerd[1444]: time="2024-12-13T13:07:28.385299185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.52692ms" Dec 13 13:07:28.388942 containerd[1444]: time="2024-12-13T13:07:28.388890265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.38156ms" Dec 13 13:07:28.390581 containerd[1444]: 
time="2024-12-13T13:07:28.390551705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.66476ms" Dec 13 13:07:28.494762 kubelet[2260]: E1213 13:07:28.494693 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="1.6s" Dec 13 13:07:28.520762 containerd[1444]: time="2024-12-13T13:07:28.520632425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:28.520762 containerd[1444]: time="2024-12-13T13:07:28.520714105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:28.520762 containerd[1444]: time="2024-12-13T13:07:28.520729905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:28.521565 containerd[1444]: time="2024-12-13T13:07:28.521053705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:28.521565 containerd[1444]: time="2024-12-13T13:07:28.521113945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:28.521565 containerd[1444]: time="2024-12-13T13:07:28.521124745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:28.521565 containerd[1444]: time="2024-12-13T13:07:28.521207945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:28.522178 containerd[1444]: time="2024-12-13T13:07:28.522114105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:28.522225 containerd[1444]: time="2024-12-13T13:07:28.522200185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:28.522263 containerd[1444]: time="2024-12-13T13:07:28.522233745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:28.522378 containerd[1444]: time="2024-12-13T13:07:28.522295985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:28.522484 containerd[1444]: time="2024-12-13T13:07:28.522447105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:28.545760 kubelet[2260]: W1213 13:07:28.545707 2260 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:28.545760 kubelet[2260]: E1213 13:07:28.545754 2260 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Dec 13 13:07:28.546099 systemd[1]: Started cri-containerd-57624e05618b4f71184a10cd126b8c6e96bd0c7c3b225fb5a90c5057d31e30f9.scope - libcontainer container 57624e05618b4f71184a10cd126b8c6e96bd0c7c3b225fb5a90c5057d31e30f9. Dec 13 13:07:28.547239 systemd[1]: Started cri-containerd-69a89b09dd3b9816066d71be36ee89da9b041f3c06d6c212aa1b9160002eba12.scope - libcontainer container 69a89b09dd3b9816066d71be36ee89da9b041f3c06d6c212aa1b9160002eba12. Dec 13 13:07:28.548249 systemd[1]: Started cri-containerd-f17244d2662834c1b262c136b4aaf65d202d70d35e7cff4dadcc24d09ed5399c.scope - libcontainer container f17244d2662834c1b262c136b4aaf65d202d70d35e7cff4dadcc24d09ed5399c. Dec 13 13:07:28.580746 containerd[1444]: time="2024-12-13T13:07:28.580514705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"69a89b09dd3b9816066d71be36ee89da9b041f3c06d6c212aa1b9160002eba12\"" Dec 13 13:07:28.580746 containerd[1444]: time="2024-12-13T13:07:28.580724945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:693889598a783314c23e956cc5b0ba0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"57624e05618b4f71184a10cd126b8c6e96bd0c7c3b225fb5a90c5057d31e30f9\"" Dec 13 13:07:28.581974 kubelet[2260]: E1213 13:07:28.581953 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:28.582812 kubelet[2260]: E1213 13:07:28.582515 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:28.585729 containerd[1444]: time="2024-12-13T13:07:28.585643185Z" level=info msg="CreateContainer within sandbox \"69a89b09dd3b9816066d71be36ee89da9b041f3c06d6c212aa1b9160002eba12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:07:28.585827 containerd[1444]: time="2024-12-13T13:07:28.585682025Z" level=info msg="CreateContainer within sandbox \"57624e05618b4f71184a10cd126b8c6e96bd0c7c3b225fb5a90c5057d31e30f9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:07:28.587186 containerd[1444]: time="2024-12-13T13:07:28.587158665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f17244d2662834c1b262c136b4aaf65d202d70d35e7cff4dadcc24d09ed5399c\"" Dec 13 13:07:28.588462 kubelet[2260]: E1213 13:07:28.588441 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:28.590162 containerd[1444]: time="2024-12-13T13:07:28.590132265Z" level=info msg="CreateContainer within sandbox \"f17244d2662834c1b262c136b4aaf65d202d70d35e7cff4dadcc24d09ed5399c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:07:28.605046 containerd[1444]: time="2024-12-13T13:07:28.605012425Z" level=info msg="CreateContainer within sandbox \"57624e05618b4f71184a10cd126b8c6e96bd0c7c3b225fb5a90c5057d31e30f9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ed97729dd8d35e4942497e65216c1e4954bda524ff60f6acd45ae1e9b64bb9a\"" Dec 13 13:07:28.605800 containerd[1444]: time="2024-12-13T13:07:28.605769145Z" level=info msg="StartContainer for \"5ed97729dd8d35e4942497e65216c1e4954bda524ff60f6acd45ae1e9b64bb9a\"" Dec 13 13:07:28.608048 containerd[1444]: time="2024-12-13T13:07:28.608007905Z" level=info msg="CreateContainer within sandbox \"f17244d2662834c1b262c136b4aaf65d202d70d35e7cff4dadcc24d09ed5399c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c1f16966d720fe11ec3e02d86b791a30ee3c549a5a8c46f3c3cb00d157451c6\"" Dec 13 13:07:28.608777 containerd[1444]: time="2024-12-13T13:07:28.608737545Z" level=info msg="StartContainer for \"7c1f16966d720fe11ec3e02d86b791a30ee3c549a5a8c46f3c3cb00d157451c6\"" Dec 13 13:07:28.609690 containerd[1444]: time="2024-12-13T13:07:28.609574665Z" level=info msg="CreateContainer within sandbox \"69a89b09dd3b9816066d71be36ee89da9b041f3c06d6c212aa1b9160002eba12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33d3ae8b932026904d3e60ea8d53b6a079ab880d55a5c79bcb9dfdf0a631e42d\"" Dec 13 13:07:28.610040 containerd[1444]: time="2024-12-13T13:07:28.610019905Z" level=info msg="StartContainer for \"33d3ae8b932026904d3e60ea8d53b6a079ab880d55a5c79bcb9dfdf0a631e42d\"" Dec 13 13:07:28.629089 systemd[1]: Started cri-containerd-5ed97729dd8d35e4942497e65216c1e4954bda524ff60f6acd45ae1e9b64bb9a.scope - libcontainer container 5ed97729dd8d35e4942497e65216c1e4954bda524ff60f6acd45ae1e9b64bb9a. Dec 13 13:07:28.633262 systemd[1]: Started cri-containerd-33d3ae8b932026904d3e60ea8d53b6a079ab880d55a5c79bcb9dfdf0a631e42d.scope - libcontainer container 33d3ae8b932026904d3e60ea8d53b6a079ab880d55a5c79bcb9dfdf0a631e42d. Dec 13 13:07:28.634296 systemd[1]: Started cri-containerd-7c1f16966d720fe11ec3e02d86b791a30ee3c549a5a8c46f3c3cb00d157451c6.scope - libcontainer container 7c1f16966d720fe11ec3e02d86b791a30ee3c549a5a8c46f3c3cb00d157451c6. 
Dec 13 13:07:28.671457 containerd[1444]: time="2024-12-13T13:07:28.671405705Z" level=info msg="StartContainer for \"7c1f16966d720fe11ec3e02d86b791a30ee3c549a5a8c46f3c3cb00d157451c6\" returns successfully" Dec 13 13:07:28.671976 containerd[1444]: time="2024-12-13T13:07:28.671678625Z" level=info msg="StartContainer for \"5ed97729dd8d35e4942497e65216c1e4954bda524ff60f6acd45ae1e9b64bb9a\" returns successfully" Dec 13 13:07:28.686051 containerd[1444]: time="2024-12-13T13:07:28.685802905Z" level=info msg="StartContainer for \"33d3ae8b932026904d3e60ea8d53b6a079ab880d55a5c79bcb9dfdf0a631e42d\" returns successfully" Dec 13 13:07:28.702725 kubelet[2260]: I1213 13:07:28.699553 2260 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:28.702725 kubelet[2260]: E1213 13:07:28.699843 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Dec 13 13:07:29.116191 kubelet[2260]: E1213 13:07:29.116152 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:29.117849 kubelet[2260]: E1213 13:07:29.116619 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:29.119878 kubelet[2260]: E1213 13:07:29.119851 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:30.121358 kubelet[2260]: E1213 13:07:30.121299 2260 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:30.245321 kubelet[2260]: E1213 13:07:30.245284 2260 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:07:30.303646 kubelet[2260]: I1213 13:07:30.303611 2260 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:30.314565 kubelet[2260]: I1213 13:07:30.314517 2260 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:07:30.335178 kubelet[2260]: E1213 13:07:30.335093 2260 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:07:30.435792 kubelet[2260]: E1213 13:07:30.435746 2260 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:07:31.076750 kubelet[2260]: I1213 13:07:31.076709 2260 apiserver.go:52] "Watching apiserver" Dec 13 13:07:31.091015 kubelet[2260]: I1213 13:07:31.090961 2260 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:07:33.007406 systemd[1]: Reloading requested from client PID 2541 ('systemctl') (unit session-7.scope)... Dec 13 13:07:33.007423 systemd[1]: Reloading... Dec 13 13:07:33.059962 zram_generator::config[2583]: No configuration found. Dec 13 13:07:33.134711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 13:07:33.197790 systemd[1]: Reloading finished in 190 ms. Dec 13 13:07:33.232273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:33.243276 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:07:33.243591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:33.243630 systemd[1]: kubelet.service: Consumed 1.539s CPU time, 112.1M memory peak, 0B memory swap peak. Dec 13 13:07:33.258368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:33.344541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:33.348502 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:07:33.388713 kubelet[2622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:07:33.388713 kubelet[2622]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:07:33.388713 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:07:33.389103 kubelet[2622]: I1213 13:07:33.388766 2622 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:07:33.393795 kubelet[2622]: I1213 13:07:33.393765 2622 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:07:33.393795 kubelet[2622]: I1213 13:07:33.393794 2622 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:07:33.393995 kubelet[2622]: I1213 13:07:33.393979 2622 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:07:33.395617 kubelet[2622]: I1213 13:07:33.395589 2622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:07:33.399031 kubelet[2622]: I1213 13:07:33.398996 2622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:07:33.404716 kubelet[2622]: I1213 13:07:33.404689 2622 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:07:33.405442 kubelet[2622]: I1213 13:07:33.404861 2622 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:07:33.405442 kubelet[2622]: I1213 13:07:33.405045 2622 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:07:33.405442 kubelet[2622]: I1213 13:07:33.405060 2622 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:07:33.405442 kubelet[2622]: I1213 13:07:33.405068 2622 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:07:33.405442 kubelet[2622]: I1213 13:07:33.405097 2622 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:07:33.405442 kubelet[2622]: I1213 13:07:33.405178 2622 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:07:33.405657 kubelet[2622]: I1213 13:07:33.405195 2622 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:07:33.405657 kubelet[2622]: I1213 13:07:33.405213 2622 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:07:33.405657 kubelet[2622]: I1213 13:07:33.405229 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:07:33.407180 kubelet[2622]: I1213 13:07:33.407024 2622 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:07:33.407373 kubelet[2622]: I1213 13:07:33.407357 2622 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:07:33.407871 kubelet[2622]: I1213 13:07:33.407850 2622 server.go:1256] "Started kubelet" Dec 13 13:07:33.414933 kubelet[2622]: I1213 13:07:33.412652 2622 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:07:33.414933 kubelet[2622]: I1213 13:07:33.412762 2622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:07:33.414933 kubelet[2622]: I1213 13:07:33.412909 2622 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:07:33.414933 kubelet[2622]: 
I1213 13:07:33.414015 2622 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:07:33.414933 kubelet[2622]: I1213 13:07:33.414378 2622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:07:33.419800 kubelet[2622]: I1213 13:07:33.419532 2622 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:07:33.421236 kubelet[2622]: I1213 13:07:33.421220 2622 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:07:33.421473 kubelet[2622]: I1213 13:07:33.421438 2622 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:07:33.434976 kubelet[2622]: I1213 13:07:33.430910 2622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:07:33.439719 kubelet[2622]: I1213 13:07:33.439699 2622 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:07:33.439862 kubelet[2622]: I1213 13:07:33.439851 2622 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:07:33.442684 kubelet[2622]: E1213 13:07:33.442640 2622 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:07:33.454260 kubelet[2622]: I1213 13:07:33.454238 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:07:33.455547 kubelet[2622]: I1213 13:07:33.455438 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:07:33.455547 kubelet[2622]: I1213 13:07:33.455463 2622 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:07:33.455547 kubelet[2622]: I1213 13:07:33.455478 2622 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:07:33.455547 kubelet[2622]: E1213 13:07:33.455538 2622 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:07:33.475013 kubelet[2622]: I1213 13:07:33.473116 2622 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:07:33.475013 kubelet[2622]: I1213 13:07:33.473164 2622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:07:33.475013 kubelet[2622]: I1213 13:07:33.473182 2622 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:07:33.475013 kubelet[2622]: I1213 13:07:33.473530 2622 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:07:33.475013 kubelet[2622]: I1213 13:07:33.473560 2622 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:07:33.475013 kubelet[2622]: I1213 13:07:33.473568 2622 policy_none.go:49] "None policy: Start" Dec 13 13:07:33.476907 kubelet[2622]: I1213 13:07:33.476885 2622 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:07:33.476983 kubelet[2622]: I1213 13:07:33.476934 2622 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:07:33.477085 kubelet[2622]: I1213 13:07:33.477070 2622 state_mem.go:75] "Updated machine memory state" Dec 13 13:07:33.481004 kubelet[2622]: I1213 13:07:33.480978 2622 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:07:33.481380 kubelet[2622]: I1213 13:07:33.481209 2622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:07:33.524140 kubelet[2622]: I1213 
13:07:33.524044 2622 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:33.550752 kubelet[2622]: I1213 13:07:33.550718 2622 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:07:33.550912 kubelet[2622]: I1213 13:07:33.550810 2622 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:07:33.556037 kubelet[2622]: I1213 13:07:33.556000 2622 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:07:33.556151 kubelet[2622]: I1213 13:07:33.556092 2622 topology_manager.go:215] "Topology Admit Handler" podUID="693889598a783314c23e956cc5b0ba0e" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:07:33.556151 kubelet[2622]: I1213 13:07:33.556147 2622 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:07:33.622492 kubelet[2622]: I1213 13:07:33.622260 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:33.622492 kubelet[2622]: I1213 13:07:33.622303 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:33.622492 kubelet[2622]: I1213 13:07:33.622322 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:33.622492 kubelet[2622]: I1213 13:07:33.622350 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/693889598a783314c23e956cc5b0ba0e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"693889598a783314c23e956cc5b0ba0e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:33.622492 kubelet[2622]: I1213 13:07:33.622372 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:33.622712 kubelet[2622]: I1213 13:07:33.622396 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:33.622712 kubelet[2622]: I1213 13:07:33.622426 2622 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:07:33.622712 kubelet[2622]: I1213 13:07:33.622445 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/693889598a783314c23e956cc5b0ba0e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"693889598a783314c23e956cc5b0ba0e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:33.622712 kubelet[2622]: I1213 13:07:33.622466 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/693889598a783314c23e956cc5b0ba0e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"693889598a783314c23e956cc5b0ba0e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:33.957493 kubelet[2622]: E1213 13:07:33.957458 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:33.963414 kubelet[2622]: E1213 13:07:33.963317 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:33.963715 kubelet[2622]: E1213 13:07:33.963697 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:34.406779 kubelet[2622]: I1213 13:07:34.406481 2622 apiserver.go:52] "Watching apiserver" Dec 13 13:07:34.421907 kubelet[2622]: I1213 13:07:34.421866 2622 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:07:34.466120 kubelet[2622]: E1213 13:07:34.466082 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:34.473779 kubelet[2622]: E1213 13:07:34.473745 2622 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 13:07:34.474050 kubelet[2622]: E1213 13:07:34.474030 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:34.474313 kubelet[2622]: E1213 13:07:34.474292 2622 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:34.474725 kubelet[2622]: E1213 13:07:34.474704 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:34.501239 kubelet[2622]: I1213 13:07:34.501197 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.501144979 podStartE2EDuration="1.501144979s" podCreationTimestamp="2024-12-13 13:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:34.481987189 +0000 UTC m=+1.129876205" watchObservedRunningTime="2024-12-13 13:07:34.501144979 +0000 UTC m=+1.149033955" Dec 13 13:07:34.513163 kubelet[2622]: I1213 13:07:34.513120 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.513078543 podStartE2EDuration="1.513078543s" podCreationTimestamp="2024-12-13 13:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:34.501618021 +0000 UTC m=+1.149507037" watchObservedRunningTime="2024-12-13 13:07:34.513078543 +0000 UTC m=+1.160967559" Dec 13 13:07:34.524854 kubelet[2622]: I1213 13:07:34.524332 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.524274344 podStartE2EDuration="1.524274344s" podCreationTimestamp="2024-12-13 13:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:34.513499704 +0000 UTC m=+1.161388720" watchObservedRunningTime="2024-12-13 13:07:34.524274344 +0000 UTC m=+1.172163360" Dec 13 13:07:35.466754 kubelet[2622]: E1213 13:07:35.466684 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:35.466754 kubelet[2622]: E1213 13:07:35.466706 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:37.842955 sudo[1626]: pam_unix(sudo:session): session closed for user root Dec 13 13:07:37.844448 sshd[1625]: Connection closed by 10.0.0.1 port 49864 Dec 13 13:07:37.844982 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:37.848450 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:49864.service: Deactivated successfully. Dec 13 13:07:37.850103 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:07:37.850248 systemd[1]: session-7.scope: Consumed 6.837s CPU time, 194.3M memory peak, 0B memory swap peak. Dec 13 13:07:37.850654 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:07:37.851750 systemd-logind[1431]: Removed session 7. 
Dec 13 13:07:40.526431 kubelet[2622]: E1213 13:07:40.526345 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:41.473631 kubelet[2622]: E1213 13:07:41.473600 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:43.326244 kubelet[2622]: E1213 13:07:43.326198 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:43.476459 kubelet[2622]: E1213 13:07:43.476376 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:44.392299 kubelet[2622]: E1213 13:07:44.392228 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:46.547264 update_engine[1435]: I20241213 13:07:46.547183 1435 update_attempter.cc:509] Updating boot flags... Dec 13 13:07:46.576987 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2722) Dec 13 13:07:48.512018 kubelet[2622]: I1213 13:07:48.511991 2622 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:07:48.527619 containerd[1444]: time="2024-12-13T13:07:48.527515616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:07:48.528059 kubelet[2622]: I1213 13:07:48.527775 2622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:07:49.406215 kubelet[2622]: I1213 13:07:49.406037 2622 topology_manager.go:215] "Topology Admit Handler" podUID="894c5fcc-6a6c-4280-a804-d754ebf0ff57" podNamespace="kube-system" podName="kube-proxy-v7chv" Dec 13 13:07:49.417593 systemd[1]: Created slice kubepods-besteffort-pod894c5fcc_6a6c_4280_a804_d754ebf0ff57.slice - libcontainer container kubepods-besteffort-pod894c5fcc_6a6c_4280_a804_d754ebf0ff57.slice. 
Dec 13 13:07:49.530460 kubelet[2622]: I1213 13:07:49.530402 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/894c5fcc-6a6c-4280-a804-d754ebf0ff57-lib-modules\") pod \"kube-proxy-v7chv\" (UID: \"894c5fcc-6a6c-4280-a804-d754ebf0ff57\") " pod="kube-system/kube-proxy-v7chv" Dec 13 13:07:49.530460 kubelet[2622]: I1213 13:07:49.530463 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/894c5fcc-6a6c-4280-a804-d754ebf0ff57-kube-proxy\") pod \"kube-proxy-v7chv\" (UID: \"894c5fcc-6a6c-4280-a804-d754ebf0ff57\") " pod="kube-system/kube-proxy-v7chv" Dec 13 13:07:49.537720 kubelet[2622]: I1213 13:07:49.530485 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/894c5fcc-6a6c-4280-a804-d754ebf0ff57-xtables-lock\") pod \"kube-proxy-v7chv\" (UID: \"894c5fcc-6a6c-4280-a804-d754ebf0ff57\") " pod="kube-system/kube-proxy-v7chv" Dec 13 13:07:49.537720 kubelet[2622]: I1213 13:07:49.530526 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7d2\" (UniqueName: \"kubernetes.io/projected/894c5fcc-6a6c-4280-a804-d754ebf0ff57-kube-api-access-gb7d2\") pod \"kube-proxy-v7chv\" (UID: \"894c5fcc-6a6c-4280-a804-d754ebf0ff57\") " pod="kube-system/kube-proxy-v7chv" Dec 13 13:07:49.572843 kubelet[2622]: I1213 13:07:49.572807 2622 topology_manager.go:215] "Topology Admit Handler" podUID="2b938993-9efa-469e-bc88-8236091d6fa8" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-4nzvc" Dec 13 13:07:49.583264 systemd[1]: Created slice kubepods-besteffort-pod2b938993_9efa_469e_bc88_8236091d6fa8.slice - libcontainer container kubepods-besteffort-pod2b938993_9efa_469e_bc88_8236091d6fa8.slice. Dec 13 13:07:49.732346 kubelet[2622]: I1213 13:07:49.732299 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2fm9\" (UniqueName: \"kubernetes.io/projected/2b938993-9efa-469e-bc88-8236091d6fa8-kube-api-access-v2fm9\") pod \"tigera-operator-c7ccbd65-4nzvc\" (UID: \"2b938993-9efa-469e-bc88-8236091d6fa8\") " pod="tigera-operator/tigera-operator-c7ccbd65-4nzvc" Dec 13 13:07:49.732346 kubelet[2622]: I1213 13:07:49.732351 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b938993-9efa-469e-bc88-8236091d6fa8-var-lib-calico\") pod \"tigera-operator-c7ccbd65-4nzvc\" (UID: \"2b938993-9efa-469e-bc88-8236091d6fa8\") " pod="tigera-operator/tigera-operator-c7ccbd65-4nzvc" Dec 13 13:07:49.732500 kubelet[2622]: E1213 13:07:49.732354 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:49.733070 containerd[1444]: time="2024-12-13T13:07:49.733024778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7chv,Uid:894c5fcc-6a6c-4280-a804-d754ebf0ff57,Namespace:kube-system,Attempt:0,}" Dec 13 13:07:49.751848 containerd[1444]: time="2024-12-13T13:07:49.751766924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:49.751848 containerd[1444]: time="2024-12-13T13:07:49.751813365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:49.752125 containerd[1444]: time="2024-12-13T13:07:49.751828205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:49.752228 containerd[1444]: time="2024-12-13T13:07:49.752110725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:49.773283 systemd[1]: Started cri-containerd-6f6449498974944903e6463b2c13ba8412446a94de3b3b236ae920e792bdb3cb.scope - libcontainer container 6f6449498974944903e6463b2c13ba8412446a94de3b3b236ae920e792bdb3cb. Dec 13 13:07:49.792809 containerd[1444]: time="2024-12-13T13:07:49.792765062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7chv,Uid:894c5fcc-6a6c-4280-a804-d754ebf0ff57,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f6449498974944903e6463b2c13ba8412446a94de3b3b236ae920e792bdb3cb\"" Dec 13 13:07:49.795838 kubelet[2622]: E1213 13:07:49.795614 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:49.799369 containerd[1444]: time="2024-12-13T13:07:49.799330271Z" level=info msg="CreateContainer within sandbox \"6f6449498974944903e6463b2c13ba8412446a94de3b3b236ae920e792bdb3cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:07:49.811758 containerd[1444]: time="2024-12-13T13:07:49.811718688Z" level=info msg="CreateContainer within sandbox \"6f6449498974944903e6463b2c13ba8412446a94de3b3b236ae920e792bdb3cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4535c89ce0cea2e052c17dc24e96a73df96a70e7fa61870ee1b1f5b907dd3c3f\"" Dec 13 13:07:49.812500 containerd[1444]: time="2024-12-13T13:07:49.812462689Z" level=info msg="StartContainer for \"4535c89ce0cea2e052c17dc24e96a73df96a70e7fa61870ee1b1f5b907dd3c3f\"" Dec 13 13:07:49.838114 systemd[1]: Started cri-containerd-4535c89ce0cea2e052c17dc24e96a73df96a70e7fa61870ee1b1f5b907dd3c3f.scope - libcontainer container 4535c89ce0cea2e052c17dc24e96a73df96a70e7fa61870ee1b1f5b907dd3c3f. Dec 13 13:07:49.869121 containerd[1444]: time="2024-12-13T13:07:49.867509086Z" level=info msg="StartContainer for \"4535c89ce0cea2e052c17dc24e96a73df96a70e7fa61870ee1b1f5b907dd3c3f\" returns successfully" Dec 13 13:07:49.886607 containerd[1444]: time="2024-12-13T13:07:49.886499192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4nzvc,Uid:2b938993-9efa-469e-bc88-8236091d6fa8,Namespace:tigera-operator,Attempt:0,}" Dec 13 13:07:49.905299 containerd[1444]: time="2024-12-13T13:07:49.905116338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:49.905299 containerd[1444]: time="2024-12-13T13:07:49.905175138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:49.905299 containerd[1444]: time="2024-12-13T13:07:49.905186778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:49.906949 containerd[1444]: time="2024-12-13T13:07:49.906846900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:49.924091 systemd[1]: Started cri-containerd-b05e59c150f49bc53706d2ab81ca66d29c3430bc9c318ec4ab90ca563877b7e0.scope - libcontainer container b05e59c150f49bc53706d2ab81ca66d29c3430bc9c318ec4ab90ca563877b7e0. Dec 13 13:07:49.962908 containerd[1444]: time="2024-12-13T13:07:49.962788098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4nzvc,Uid:2b938993-9efa-469e-bc88-8236091d6fa8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b05e59c150f49bc53706d2ab81ca66d29c3430bc9c318ec4ab90ca563877b7e0\"" Dec 13 13:07:49.965380 containerd[1444]: time="2024-12-13T13:07:49.965291022Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 13:07:50.486794 kubelet[2622]: E1213 13:07:50.486720 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:50.495907 kubelet[2622]: I1213 13:07:50.495737 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v7chv" podStartSLOduration=1.495685397 podStartE2EDuration="1.495685397s" podCreationTimestamp="2024-12-13 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:50.495467557 +0000 UTC m=+17.143356573" watchObservedRunningTime="2024-12-13 13:07:50.495685397 +0000 UTC m=+17.143574413" Dec 13 13:07:50.964898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168981553.mount: Deactivated successfully. 
Dec 13 13:07:51.222169 containerd[1444]: time="2024-12-13T13:07:51.222016847Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:51.223621 containerd[1444]: time="2024-12-13T13:07:51.223577569Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125964" Dec 13 13:07:51.224516 containerd[1444]: time="2024-12-13T13:07:51.224479370Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:51.226193 containerd[1444]: time="2024-12-13T13:07:51.226166052Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:51.227036 containerd[1444]: time="2024-12-13T13:07:51.227010413Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.261679711s" Dec 13 13:07:51.227100 containerd[1444]: time="2024-12-13T13:07:51.227042813Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 13:07:51.237934 containerd[1444]: time="2024-12-13T13:07:51.237882267Z" level=info msg="CreateContainer within sandbox \"b05e59c150f49bc53706d2ab81ca66d29c3430bc9c318ec4ab90ca563877b7e0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 13:07:51.278808 containerd[1444]: time="2024-12-13T13:07:51.278723677Z" level=info msg="CreateContainer within sandbox \"b05e59c150f49bc53706d2ab81ca66d29c3430bc9c318ec4ab90ca563877b7e0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"424b54f9823b93640698876bd8969a0a8498eea2b474452e9e0ac9088d74a91f\"" Dec 13 13:07:51.279540 containerd[1444]: time="2024-12-13T13:07:51.279509118Z" level=info msg="StartContainer for \"424b54f9823b93640698876bd8969a0a8498eea2b474452e9e0ac9088d74a91f\"" Dec 13 13:07:51.306074 systemd[1]: Started cri-containerd-424b54f9823b93640698876bd8969a0a8498eea2b474452e9e0ac9088d74a91f.scope - libcontainer container 424b54f9823b93640698876bd8969a0a8498eea2b474452e9e0ac9088d74a91f. 
Dec 13 13:07:51.331894 containerd[1444]: time="2024-12-13T13:07:51.331777221Z" level=info msg="StartContainer for \"424b54f9823b93640698876bd8969a0a8498eea2b474452e9e0ac9088d74a91f\" returns successfully" Dec 13 13:07:51.500862 kubelet[2622]: I1213 13:07:51.500737 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-4nzvc" podStartSLOduration=1.2364386330000001 podStartE2EDuration="2.500694948s" podCreationTimestamp="2024-12-13 13:07:49 +0000 UTC" firstStartedPulling="2024-12-13 13:07:49.96417794 +0000 UTC m=+16.612066916" lastFinishedPulling="2024-12-13 13:07:51.228434215 +0000 UTC m=+17.876323231" observedRunningTime="2024-12-13 13:07:51.500517148 +0000 UTC m=+18.148406164" watchObservedRunningTime="2024-12-13 13:07:51.500694948 +0000 UTC m=+18.148583964" Dec 13 13:07:55.435289 kubelet[2622]: I1213 13:07:55.435245 2622 topology_manager.go:215] "Topology Admit Handler" podUID="27289ec7-7a8d-42af-9e3a-5f87fb08ac99" podNamespace="calico-system" podName="calico-typha-64f77449dd-8brs2" Dec 13 13:07:55.451210 systemd[1]: Created slice kubepods-besteffort-pod27289ec7_7a8d_42af_9e3a_5f87fb08ac99.slice - libcontainer container kubepods-besteffort-pod27289ec7_7a8d_42af_9e3a_5f87fb08ac99.slice. Dec 13 13:07:55.484943 kubelet[2622]: I1213 13:07:55.484745 2622 topology_manager.go:215] "Topology Admit Handler" podUID="17402168-e267-4760-80e7-9d7c2d54ac29" podNamespace="calico-system" podName="calico-node-k762b" Dec 13 13:07:55.498606 systemd[1]: Created slice kubepods-besteffort-pod17402168_e267_4760_80e7_9d7c2d54ac29.slice - libcontainer container kubepods-besteffort-pod17402168_e267_4760_80e7_9d7c2d54ac29.slice. Dec 13 13:07:55.569836 kubelet[2622]: I1213 13:07:55.569795 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmddw\" (UniqueName: \"kubernetes.io/projected/27289ec7-7a8d-42af-9e3a-5f87fb08ac99-kube-api-access-mmddw\") pod \"calico-typha-64f77449dd-8brs2\" (UID: \"27289ec7-7a8d-42af-9e3a-5f87fb08ac99\") " pod="calico-system/calico-typha-64f77449dd-8brs2" Dec 13 13:07:55.569836 kubelet[2622]: I1213 13:07:55.569846 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/27289ec7-7a8d-42af-9e3a-5f87fb08ac99-typha-certs\") pod \"calico-typha-64f77449dd-8brs2\" (UID: \"27289ec7-7a8d-42af-9e3a-5f87fb08ac99\") " pod="calico-system/calico-typha-64f77449dd-8brs2" Dec 13 13:07:55.570026 kubelet[2622]: I1213 13:07:55.569884 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27289ec7-7a8d-42af-9e3a-5f87fb08ac99-tigera-ca-bundle\") pod \"calico-typha-64f77449dd-8brs2\" (UID: \"27289ec7-7a8d-42af-9e3a-5f87fb08ac99\") " pod="calico-system/calico-typha-64f77449dd-8brs2" Dec 13 13:07:55.604896 kubelet[2622]: I1213 13:07:55.603680 2622 topology_manager.go:215] "Topology Admit Handler" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" podNamespace="calico-system" podName="csi-node-driver-bvvfs" Dec 13 13:07:55.606193 kubelet[2622]: E1213 13:07:55.605866 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 
13:07:55.671124 kubelet[2622]: I1213 13:07:55.671083 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-cni-log-dir\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671256 kubelet[2622]: I1213 13:07:55.671170 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-xtables-lock\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671256 kubelet[2622]: I1213 13:07:55.671198 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-cni-net-dir\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671256 kubelet[2622]: I1213 13:07:55.671221 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbc2\" (UniqueName: \"kubernetes.io/projected/17402168-e267-4760-80e7-9d7c2d54ac29-kube-api-access-wtbc2\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671526 kubelet[2622]: I1213 13:07:55.671497 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-var-lib-calico\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671567 kubelet[2622]: I1213 13:07:55.671533 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-cni-bin-dir\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671567 kubelet[2622]: I1213 13:07:55.671555 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-policysync\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671720 kubelet[2622]: I1213 13:07:55.671693 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17402168-e267-4760-80e7-9d7c2d54ac29-tigera-ca-bundle\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671758 kubelet[2622]: I1213 13:07:55.671735 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/17402168-e267-4760-80e7-9d7c2d54ac29-node-certs\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671785 kubelet[2622]: I1213 13:07:55.671775 2622 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-flexvol-driver-host\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671812 kubelet[2622]: I1213 13:07:55.671799 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-lib-modules\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.671837 kubelet[2622]: I1213 13:07:55.671832 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17402168-e267-4760-80e7-9d7c2d54ac29-var-run-calico\") pod \"calico-node-k762b\" (UID: \"17402168-e267-4760-80e7-9d7c2d54ac29\") " pod="calico-system/calico-node-k762b" Dec 13 13:07:55.758891 kubelet[2622]: E1213 13:07:55.758682 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:55.759629 containerd[1444]: time="2024-12-13T13:07:55.759580068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64f77449dd-8brs2,Uid:27289ec7-7a8d-42af-9e3a-5f87fb08ac99,Namespace:calico-system,Attempt:0,}" Dec 13 13:07:55.772225 kubelet[2622]: I1213 13:07:55.772194 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0cbbdf0f-40e1-46d6-a471-bc442a66a580-varrun\") pod \"csi-node-driver-bvvfs\" (UID: \"0cbbdf0f-40e1-46d6-a471-bc442a66a580\") " pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:07:55.772501 kubelet[2622]: I1213 13:07:55.772476 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cbbdf0f-40e1-46d6-a471-bc442a66a580-kubelet-dir\") pod \"csi-node-driver-bvvfs\" (UID: \"0cbbdf0f-40e1-46d6-a471-bc442a66a580\") " pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:07:55.772555 kubelet[2622]: I1213 13:07:55.772541 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwqqz\" (UniqueName: \"kubernetes.io/projected/0cbbdf0f-40e1-46d6-a471-bc442a66a580-kube-api-access-nwqqz\") pod \"csi-node-driver-bvvfs\" (UID: \"0cbbdf0f-40e1-46d6-a471-bc442a66a580\") " pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:07:55.772586 kubelet[2622]: I1213 13:07:55.772571 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0cbbdf0f-40e1-46d6-a471-bc442a66a580-registration-dir\") pod \"csi-node-driver-bvvfs\" (UID: \"0cbbdf0f-40e1-46d6-a471-bc442a66a580\") " pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:07:55.774845 kubelet[2622]: I1213 13:07:55.772651 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0cbbdf0f-40e1-46d6-a471-bc442a66a580-socket-dir\") pod \"csi-node-driver-bvvfs\" (UID: \"0cbbdf0f-40e1-46d6-a471-bc442a66a580\") " pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:07:55.843248 
containerd[1444]: time="2024-12-13T13:07:55.842839867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:55.843248 containerd[1444]: time="2024-12-13T13:07:55.842908507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:55.843248 containerd[1444]: time="2024-12-13T13:07:55.842933787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:55.843248 containerd[1444]: time="2024-12-13T13:07:55.843022547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:55.870109 systemd[1]: Started cri-containerd-9e87dbd77b69b62ae65b16ad8f62111691386056704aab19a1524426238a2485.scope - libcontainer container 9e87dbd77b69b62ae65b16ad8f62111691386056704aab19a1524426238a2485. Dec 13 13:07:55.873667 kubelet[2622]: E1213 13:07:55.873510 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.873667 kubelet[2622]: W1213 13:07:55.873535 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.873667 kubelet[2622]: E1213 13:07:55.873571 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.873809 kubelet[2622]: E1213 13:07:55.873752 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.873809 kubelet[2622]: W1213 13:07:55.873760 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.873809 kubelet[2622]: E1213 13:07:55.873772 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.874478 kubelet[2622]: E1213 13:07:55.873945 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.874478 kubelet[2622]: W1213 13:07:55.873957 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.874478 kubelet[2622]: E1213 13:07:55.873969 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:55.874478 kubelet[2622]: E1213 13:07:55.874132 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.874478 kubelet[2622]: W1213 13:07:55.874139 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.874478 kubelet[2622]: E1213 13:07:55.874151 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.874905 kubelet[2622]: E1213 13:07:55.874599 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.874905 kubelet[2622]: W1213 13:07:55.874612 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.874905 kubelet[2622]: E1213 13:07:55.874633 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.876141 kubelet[2622]: E1213 13:07:55.875912 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.876141 kubelet[2622]: W1213 13:07:55.875943 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.876141 kubelet[2622]: E1213 13:07:55.875966 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.876654 kubelet[2622]: E1213 13:07:55.876259 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.876654 kubelet[2622]: W1213 13:07:55.876326 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.876654 kubelet[2622]: E1213 13:07:55.876347 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.876816 kubelet[2622]: E1213 13:07:55.876781 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.877244 kubelet[2622]: W1213 13:07:55.877222 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.877526 kubelet[2622]: E1213 13:07:55.877490 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:55.877762 kubelet[2622]: E1213 13:07:55.877736 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.877762 kubelet[2622]: W1213 13:07:55.877749 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.878063 kubelet[2622]: E1213 13:07:55.878048 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.878533 kubelet[2622]: E1213 13:07:55.878480 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.878533 kubelet[2622]: W1213 13:07:55.878495 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.878619 kubelet[2622]: E1213 13:07:55.878539 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.879004 kubelet[2622]: E1213 13:07:55.878902 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.879004 kubelet[2622]: W1213 13:07:55.878914 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.879090 kubelet[2622]: E1213 13:07:55.879024 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.880404 kubelet[2622]: E1213 13:07:55.879685 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.880404 kubelet[2622]: W1213 13:07:55.880269 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.880404 kubelet[2622]: E1213 13:07:55.880377 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.881178 kubelet[2622]: E1213 13:07:55.880844 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.881178 kubelet[2622]: W1213 13:07:55.880882 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.881178 kubelet[2622]: E1213 13:07:55.880950 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:55.882253 kubelet[2622]: E1213 13:07:55.882096 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.882253 kubelet[2622]: W1213 13:07:55.882111 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.882253 kubelet[2622]: E1213 13:07:55.882170 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.882482 kubelet[2622]: E1213 13:07:55.882470 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.882644 kubelet[2622]: W1213 13:07:55.882558 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.882899 kubelet[2622]: E1213 13:07:55.882820 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.883143 kubelet[2622]: E1213 13:07:55.883131 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.883229 kubelet[2622]: W1213 13:07:55.883182 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.883600 kubelet[2622]: E1213 13:07:55.883448 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.889192 kubelet[2622]: E1213 13:07:55.889102 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.889733 kubelet[2622]: W1213 13:07:55.889707 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.889897 kubelet[2622]: E1213 13:07:55.889871 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.890579 kubelet[2622]: E1213 13:07:55.890557 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.891373 kubelet[2622]: W1213 13:07:55.891303 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.891707 kubelet[2622]: E1213 13:07:55.891549 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:55.891943 kubelet[2622]: E1213 13:07:55.891917 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.892534 kubelet[2622]: W1213 13:07:55.892361 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.892534 kubelet[2622]: E1213 13:07:55.892433 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.892880 kubelet[2622]: E1213 13:07:55.892825 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.892880 kubelet[2622]: W1213 13:07:55.892839 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.893054 kubelet[2622]: E1213 13:07:55.893001 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.893486 kubelet[2622]: E1213 13:07:55.893315 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.893486 kubelet[2622]: W1213 13:07:55.893328 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.893640 kubelet[2622]: E1213 13:07:55.893624 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.894272 kubelet[2622]: E1213 13:07:55.893776 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.894272 kubelet[2622]: W1213 13:07:55.894091 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.894462 kubelet[2622]: E1213 13:07:55.894428 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.894842 kubelet[2622]: E1213 13:07:55.894743 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.894842 kubelet[2622]: W1213 13:07:55.894755 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.894968 kubelet[2622]: E1213 13:07:55.894954 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:55.895301 kubelet[2622]: E1213 13:07:55.895226 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.895301 kubelet[2622]: W1213 13:07:55.895239 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.896062 kubelet[2622]: E1213 13:07:55.895636 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.896370 kubelet[2622]: E1213 13:07:55.896356 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.896586 kubelet[2622]: W1213 13:07:55.896536 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.896586 kubelet[2622]: E1213 13:07:55.896560 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.906885 kubelet[2622]: E1213 13:07:55.906860 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:55.906885 kubelet[2622]: W1213 13:07:55.906878 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:55.907066 kubelet[2622]: E1213 13:07:55.906894 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:55.941080 containerd[1444]: time="2024-12-13T13:07:55.941000720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64f77449dd-8brs2,Uid:27289ec7-7a8d-42af-9e3a-5f87fb08ac99,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e87dbd77b69b62ae65b16ad8f62111691386056704aab19a1524426238a2485\"" Dec 13 13:07:55.941840 kubelet[2622]: E1213 13:07:55.941816 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:55.944739 containerd[1444]: time="2024-12-13T13:07:55.944705083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 13:07:56.101308 kubelet[2622]: E1213 13:07:56.101200 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:56.102019 containerd[1444]: time="2024-12-13T13:07:56.101676906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k762b,Uid:17402168-e267-4760-80e7-9d7c2d54ac29,Namespace:calico-system,Attempt:0,}" Dec 13 13:07:56.122152 containerd[1444]: time="2024-12-13T13:07:56.121956044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:07:56.122152 containerd[1444]: time="2024-12-13T13:07:56.122008484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:07:56.122152 containerd[1444]: time="2024-12-13T13:07:56.122019364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:56.122152 containerd[1444]: time="2024-12-13T13:07:56.122088684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:07:56.141124 systemd[1]: Started cri-containerd-17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2.scope - libcontainer container 17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2. Dec 13 13:07:56.160958 containerd[1444]: time="2024-12-13T13:07:56.160903118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k762b,Uid:17402168-e267-4760-80e7-9d7c2d54ac29,Namespace:calico-system,Attempt:0,} returns sandbox id \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\"" Dec 13 13:07:56.161680 kubelet[2622]: E1213 13:07:56.161644 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:56.929601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824494828.mount: Deactivated successfully. Dec 13 13:07:57.456234 kubelet[2622]: E1213 13:07:57.456190 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:07:57.480676 containerd[1444]: time="2024-12-13T13:07:57.480522941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:57.481468 containerd[1444]: time="2024-12-13T13:07:57.481425862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 13:07:57.482167 containerd[1444]: time="2024-12-13T13:07:57.482137662Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:57.484689 containerd[1444]: time="2024-12-13T13:07:57.484639144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:57.485338 containerd[1444]: time="2024-12-13T13:07:57.485299065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.540554542s" Dec 13 13:07:57.485338 containerd[1444]: time="2024-12-13T13:07:57.485334105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 13:07:57.486165 containerd[1444]: time="2024-12-13T13:07:57.486135386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:07:57.493211 containerd[1444]: time="2024-12-13T13:07:57.493106351Z" level=info msg="CreateContainer within sandbox \"9e87dbd77b69b62ae65b16ad8f62111691386056704aab19a1524426238a2485\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 13:07:57.509368 containerd[1444]: time="2024-12-13T13:07:57.508183644Z" level=info msg="CreateContainer within sandbox \"9e87dbd77b69b62ae65b16ad8f62111691386056704aab19a1524426238a2485\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9\"" Dec 13 13:07:57.510423 containerd[1444]: time="2024-12-13T13:07:57.509804965Z" level=info msg="StartContainer for \"13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9\"" Dec 13 13:07:57.538104 systemd[1]: Started cri-containerd-13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9.scope - libcontainer container 13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9. Dec 13 13:07:57.572305 containerd[1444]: time="2024-12-13T13:07:57.572269097Z" level=info msg="StartContainer for \"13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9\" returns successfully" Dec 13 13:07:58.507340 kubelet[2622]: E1213 13:07:58.506774 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:58.520985 kubelet[2622]: I1213 13:07:58.520582 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-64f77449dd-8brs2" podStartSLOduration=1.9789732359999999 podStartE2EDuration="3.520507898s" podCreationTimestamp="2024-12-13 13:07:55 +0000 UTC" firstStartedPulling="2024-12-13 13:07:55.944135843 +0000 UTC m=+22.592024819" lastFinishedPulling="2024-12-13 13:07:57.485670465 +0000 UTC m=+24.133559481" observedRunningTime="2024-12-13 13:07:58.518913177 +0000 UTC m=+25.166802193" watchObservedRunningTime="2024-12-13 13:07:58.520507898 +0000 UTC m=+25.168396914" Dec 13 13:07:58.561045 containerd[1444]: time="2024-12-13T13:07:58.560410489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:58.561045 containerd[1444]: time="2024-12-13T13:07:58.561011530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 13:07:58.561769 containerd[1444]: time="2024-12-13T13:07:58.561742170Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:58.563987 containerd[1444]: time="2024-12-13T13:07:58.563957412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:58.564579 containerd[1444]: time="2024-12-13T13:07:58.564555692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo 
tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.078387986s" Dec 13 13:07:58.564620 containerd[1444]: time="2024-12-13T13:07:58.564584252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 13:07:58.567186 containerd[1444]: time="2024-12-13T13:07:58.567154174Z" level=info msg="CreateContainer within sandbox \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:07:58.577982 containerd[1444]: time="2024-12-13T13:07:58.577905863Z" level=info msg="CreateContainer within sandbox \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc\"" Dec 13 13:07:58.578429 containerd[1444]: time="2024-12-13T13:07:58.578405343Z" level=info msg="StartContainer for \"1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc\"" Dec 13 13:07:58.600113 kubelet[2622]: E1213 13:07:58.600076 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.600113 kubelet[2622]: W1213 13:07:58.600097 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.600113 kubelet[2622]: E1213 13:07:58.600116 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.600523 kubelet[2622]: E1213 13:07:58.600300 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.600523 kubelet[2622]: W1213 13:07:58.600313 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.600523 kubelet[2622]: E1213 13:07:58.600324 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.600523 kubelet[2622]: E1213 13:07:58.600484 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.600523 kubelet[2622]: W1213 13:07:58.600492 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.600523 kubelet[2622]: E1213 13:07:58.600504 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:58.600744 kubelet[2622]: E1213 13:07:58.600685 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.600744 kubelet[2622]: W1213 13:07:58.600694 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.600744 kubelet[2622]: E1213 13:07:58.600705 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.601263 kubelet[2622]: E1213 13:07:58.601128 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.601263 kubelet[2622]: W1213 13:07:58.601143 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.601263 kubelet[2622]: E1213 13:07:58.601169 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.601757 kubelet[2622]: E1213 13:07:58.601537 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.601757 kubelet[2622]: W1213 13:07:58.601551 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.601757 kubelet[2622]: E1213 13:07:58.601572 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.601951 kubelet[2622]: E1213 13:07:58.601918 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.602110 kubelet[2622]: W1213 13:07:58.602034 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.602110 kubelet[2622]: E1213 13:07:58.602052 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.602465 kubelet[2622]: E1213 13:07:58.602325 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.602465 kubelet[2622]: W1213 13:07:58.602342 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.602465 kubelet[2622]: E1213 13:07:58.602355 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:58.602638 kubelet[2622]: E1213 13:07:58.602624 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.602709 kubelet[2622]: W1213 13:07:58.602697 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.602873 kubelet[2622]: E1213 13:07:58.602801 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.603227 kubelet[2622]: E1213 13:07:58.603118 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.603227 kubelet[2622]: W1213 13:07:58.603134 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.603227 kubelet[2622]: E1213 13:07:58.603147 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.603416 kubelet[2622]: E1213 13:07:58.603399 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.603556 kubelet[2622]: W1213 13:07:58.603445 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.603556 kubelet[2622]: E1213 13:07:58.603475 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.603861 kubelet[2622]: E1213 13:07:58.603813 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.603861 kubelet[2622]: W1213 13:07:58.603826 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.603861 kubelet[2622]: E1213 13:07:58.603840 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.604399 kubelet[2622]: E1213 13:07:58.604288 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.604399 kubelet[2622]: W1213 13:07:58.604302 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.604399 kubelet[2622]: E1213 13:07:58.604314 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:07:58.604619 kubelet[2622]: E1213 13:07:58.604518 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.604619 kubelet[2622]: W1213 13:07:58.604528 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.604619 kubelet[2622]: E1213 13:07:58.604542 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.605012 kubelet[2622]: E1213 13:07:58.604917 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:07:58.605012 kubelet[2622]: W1213 13:07:58.604948 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:07:58.605012 kubelet[2622]: E1213 13:07:58.604960 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:07:58.610119 systemd[1]: Started cri-containerd-1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc.scope - libcontainer container 1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc. Dec 13 13:07:58.663456 systemd[1]: cri-containerd-1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc.scope: Deactivated successfully. Dec 13 13:07:58.709288 containerd[1444]: time="2024-12-13T13:07:58.709233525Z" level=info msg="StartContainer for \"1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc\" returns successfully" Dec 13 13:07:58.725666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc-rootfs.mount: Deactivated successfully. 
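The repeated "unexpected end of JSON input" records above come from the kubelet's FlexVolume probe loop: it execs each driver binary found under the plugin directory with the argument init and parses its stdout as JSON, so a missing executable yields empty output and the JSON decode fails before the exec failure itself is surfaced. A minimal Go sketch of that pattern (type and function names here are illustrative, not the kubelet's own code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON status a FlexVolume driver is expected to
// print; the struct name is illustrative.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeDriver(path string) (*DriverStatus, error) {
	// "init" is the call the kubelet makes when probing a driver directory.
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st DriverStatus
	// With the binary absent, out is empty, so this fails with
	// "unexpected end of JSON input" regardless of what execErr says.
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("unmarshal output %q: %w (exec: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println(err)
	}
}

This also explains why the errors stop being fatal: the probe merely skips the nodeagent~uds directory until Calico's flexvol-driver container (started above) installs the uds binary there.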
Dec 13 13:07:58.731450 containerd[1444]: time="2024-12-13T13:07:58.731387982Z" level=info msg="shim disconnected" id=1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc namespace=k8s.io Dec 13 13:07:58.731450 containerd[1444]: time="2024-12-13T13:07:58.731437982Z" level=warning msg="cleaning up after shim disconnected" id=1ddc14e60c76213432ca32208bc29d26d36911bd7e2fd712d3d79de74633effc namespace=k8s.io Dec 13 13:07:58.731450 containerd[1444]: time="2024-12-13T13:07:58.731446582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:07:59.456079 kubelet[2622]: E1213 13:07:59.456035 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:07:59.509329 kubelet[2622]: E1213 13:07:59.509292 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:59.509880 kubelet[2622]: E1213 13:07:59.509836 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:59.510645 containerd[1444]: time="2024-12-13T13:07:59.510528284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:08:00.511604 kubelet[2622]: E1213 13:08:00.510803 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:01.456079 kubelet[2622]: E1213 13:08:01.455998 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:08:01.697959 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:43468.service - OpenSSH per-connection server daemon (10.0.0.1:43468). Dec 13 13:08:01.743232 sshd[3278]: Accepted publickey for core from 10.0.0.1 port 43468 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:01.744467 sshd-session[3278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:01.750605 systemd-logind[1431]: New session 8 of user core. Dec 13 13:08:01.766077 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:08:01.891962 sshd[3280]: Connection closed by 10.0.0.1 port 43468 Dec 13 13:08:01.892690 sshd-session[3278]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:01.896437 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:43468.service: Deactivated successfully. Dec 13 13:08:01.898265 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:08:01.900444 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:08:01.901785 systemd-logind[1431]: Removed session 8. 
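The recurring "Nameserver limits exceeded" events reflect the classic resolv.conf constraint: resolvers honor at most three nameserver entries, so the kubelet applies the first three from the host configuration (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A rough sketch of that clamping, assuming a plain resolv.conf as input (helper names are illustrative):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Resolvers traditionally honor at most three nameservers.
const maxNameservers = 3

// clampNameservers keeps the first three nameserver entries and reports
// whether any were omitted; the function name is illustrative.
func clampNameservers(resolvConf string) ([]string, bool) {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) >= 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true
	}
	return servers, false
}

func main() {
	applied, omitted := clampNameservers(
		"nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n")
	// Prints the applied nameserver line seen in the log, plus the omission flag.
	fmt.Println(strings.Join(applied, " "), "| omitted:", omitted)
}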
Dec 13 13:08:03.369960 containerd[1444]: time="2024-12-13T13:08:03.369899658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:03.370628 containerd[1444]: time="2024-12-13T13:08:03.370460259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 13:08:03.371219 containerd[1444]: time="2024-12-13T13:08:03.371194619Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:03.373286 containerd[1444]: time="2024-12-13T13:08:03.373228820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:03.374020 containerd[1444]: time="2024-12-13T13:08:03.373990661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.863417137s" Dec 13 13:08:03.374020 containerd[1444]: time="2024-12-13T13:08:03.374018941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 13:08:03.376697 containerd[1444]: time="2024-12-13T13:08:03.376492302Z" level=info msg="CreateContainer within sandbox \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:08:03.386272 containerd[1444]: time="2024-12-13T13:08:03.386216388Z" level=info msg="CreateContainer within sandbox \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de\"" Dec 13 13:08:03.387946 containerd[1444]: time="2024-12-13T13:08:03.386945268Z" level=info msg="StartContainer for \"46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de\"" Dec 13 13:08:03.431247 systemd[1]: Started cri-containerd-46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de.scope - libcontainer container 46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de. 
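The "cni plugin not initialized" pod errors persist until the install-cni container started above has copied the Calico binaries and written a network config into the CNI config directory; the runtime reports NetworkReady only once such a config can be loaded. A sketch of that readiness condition, assuming the conventional config path /etc/cni/net.d (not the runtime's actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured reports whether any CNI network config exists in confDir;
// until Calico's install-cni writes one, the node stays NetworkReady=false.
func cniConfigured(confDir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/cni/net.d") // conventional default, assumed here
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("NetworkReady:", ok)
}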
Dec 13 13:08:03.457754 kubelet[2622]: E1213 13:08:03.456590 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:08:03.476411 containerd[1444]: time="2024-12-13T13:08:03.476297598Z" level=info msg="StartContainer for \"46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de\" returns successfully" Dec 13 13:08:03.518945 kubelet[2622]: E1213 13:08:03.517995 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:04.090937 systemd[1]: cri-containerd-46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de.scope: Deactivated successfully. Dec 13 13:08:04.108840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de-rootfs.mount: Deactivated successfully. Dec 13 13:08:04.110838 containerd[1444]: time="2024-12-13T13:08:04.110783913Z" level=info msg="shim disconnected" id=46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de namespace=k8s.io Dec 13 13:08:04.110838 containerd[1444]: time="2024-12-13T13:08:04.110837273Z" level=warning msg="cleaning up after shim disconnected" id=46460b7a5b18d430ecb8d95fb44af22c8046551fa95c393e7141a6645befd5de namespace=k8s.io Dec 13 13:08:04.110982 containerd[1444]: time="2024-12-13T13:08:04.110846073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:08:04.132697 kubelet[2622]: I1213 13:08:04.132657 2622 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:08:04.172812 kubelet[2622]: I1213 13:08:04.172300 2622 topology_manager.go:215] "Topology Admit Handler" podUID="6531a85c-6dd0-4079-bc39-8116bc3f4b54" podNamespace="kube-system" podName="coredns-76f75df574-njbdr" Dec 13 13:08:04.173575 kubelet[2622]: I1213 13:08:04.173528 2622 topology_manager.go:215] "Topology Admit Handler" podUID="575f75ad-b249-4356-8ee6-1279602164ae" podNamespace="kube-system" podName="coredns-76f75df574-kvwcl" Dec 13 13:08:04.177391 kubelet[2622]: I1213 13:08:04.177301 2622 topology_manager.go:215] "Topology Admit Handler" podUID="b381e5c1-7896-4e9c-934b-2f01903d7a34" podNamespace="calico-apiserver" podName="calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:04.177772 kubelet[2622]: I1213 13:08:04.177569 2622 topology_manager.go:215] "Topology Admit Handler" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" podNamespace="calico-system" podName="calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:04.177842 kubelet[2622]: I1213 13:08:04.177811 2622 topology_manager.go:215] "Topology Admit Handler" podUID="2fee8b36-caea-489f-b412-0f8b4408366f" podNamespace="calico-apiserver" podName="calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:04.183782 systemd[1]: Created slice kubepods-burstable-pod6531a85c_6dd0_4079_bc39_8116bc3f4b54.slice - libcontainer container kubepods-burstable-pod6531a85c_6dd0_4079_bc39_8116bc3f4b54.slice. Dec 13 13:08:04.201681 systemd[1]: Created slice kubepods-burstable-pod575f75ad_b249_4356_8ee6_1279602164ae.slice - libcontainer container kubepods-burstable-pod575f75ad_b249_4356_8ee6_1279602164ae.slice. 
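The slice names in the surrounding records follow the kubelet's systemd cgroup-driver convention: each pod gets a slice under its QoS-class parent (kubepods-burstable, kubepods-besteffort), with the pod UID embedded and its dashes escaped to underscores. A small sketch reproducing the names seen here (the helper name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming visible in the "Created slice" records:
// QoS-class parent plus the pod UID with dashes escaped to underscores.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-burstable-pod6531a85c_6dd0_4079_bc39_8116bc3f4b54.slice" above.
	fmt.Println(podSliceName("burstable", "6531a85c-6dd0-4079-bc39-8116bc3f4b54"))
	// And the besteffort form used for the calico-apiserver pods.
	fmt.Println(podSliceName("besteffort", "b381e5c1-7896-4e9c-934b-2f01903d7a34"))
}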
Dec 13 13:08:04.208887 systemd[1]: Created slice kubepods-besteffort-podb381e5c1_7896_4e9c_934b_2f01903d7a34.slice - libcontainer container kubepods-besteffort-podb381e5c1_7896_4e9c_934b_2f01903d7a34.slice. Dec 13 13:08:04.216537 systemd[1]: Created slice kubepods-besteffort-podafc0c628_56bd_4014_86d9_0b030f93cf65.slice - libcontainer container kubepods-besteffort-podafc0c628_56bd_4014_86d9_0b030f93cf65.slice. Dec 13 13:08:04.222904 systemd[1]: Created slice kubepods-besteffort-pod2fee8b36_caea_489f_b412_0f8b4408366f.slice - libcontainer container kubepods-besteffort-pod2fee8b36_caea_489f_b412_0f8b4408366f.slice. Dec 13 13:08:04.254713 kubelet[2622]: I1213 13:08:04.254662 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rpck\" (UniqueName: \"kubernetes.io/projected/2fee8b36-caea-489f-b412-0f8b4408366f-kube-api-access-2rpck\") pod \"calico-apiserver-7d6cbc9658-ph7sh\" (UID: \"2fee8b36-caea-489f-b412-0f8b4408366f\") " pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:04.254713 kubelet[2622]: I1213 13:08:04.254710 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6531a85c-6dd0-4079-bc39-8116bc3f4b54-config-volume\") pod \"coredns-76f75df574-njbdr\" (UID: \"6531a85c-6dd0-4079-bc39-8116bc3f4b54\") " pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:04.254870 kubelet[2622]: I1213 13:08:04.254734 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/575f75ad-b249-4356-8ee6-1279602164ae-config-volume\") pod \"coredns-76f75df574-kvwcl\" (UID: \"575f75ad-b249-4356-8ee6-1279602164ae\") " pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:04.255411 kubelet[2622]: I1213 13:08:04.255128 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2fee8b36-caea-489f-b412-0f8b4408366f-calico-apiserver-certs\") pod \"calico-apiserver-7d6cbc9658-ph7sh\" (UID: \"2fee8b36-caea-489f-b412-0f8b4408366f\") " pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:04.255411 kubelet[2622]: I1213 13:08:04.255167 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0c628-56bd-4014-86d9-0b030f93cf65-tigera-ca-bundle\") pod \"calico-kube-controllers-6895d58756-pfggb\" (UID: \"afc0c628-56bd-4014-86d9-0b030f93cf65\") " pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:04.255411 kubelet[2622]: I1213 13:08:04.255191 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgb5\" (UniqueName: \"kubernetes.io/projected/b381e5c1-7896-4e9c-934b-2f01903d7a34-kube-api-access-vzgb5\") pod \"calico-apiserver-7d6cbc9658-ggvqp\" (UID: \"b381e5c1-7896-4e9c-934b-2f01903d7a34\") " pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:04.255411 kubelet[2622]: I1213 13:08:04.255212 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8tws\" (UniqueName: \"kubernetes.io/projected/6531a85c-6dd0-4079-bc39-8116bc3f4b54-kube-api-access-g8tws\") pod \"coredns-76f75df574-njbdr\" (UID: \"6531a85c-6dd0-4079-bc39-8116bc3f4b54\") " 
pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:04.255411 kubelet[2622]: I1213 13:08:04.255235 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftlp6\" (UniqueName: \"kubernetes.io/projected/afc0c628-56bd-4014-86d9-0b030f93cf65-kube-api-access-ftlp6\") pod \"calico-kube-controllers-6895d58756-pfggb\" (UID: \"afc0c628-56bd-4014-86d9-0b030f93cf65\") " pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:04.255567 kubelet[2622]: I1213 13:08:04.255258 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b381e5c1-7896-4e9c-934b-2f01903d7a34-calico-apiserver-certs\") pod \"calico-apiserver-7d6cbc9658-ggvqp\" (UID: \"b381e5c1-7896-4e9c-934b-2f01903d7a34\") " pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:04.255567 kubelet[2622]: I1213 13:08:04.255302 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzdc\" (UniqueName: \"kubernetes.io/projected/575f75ad-b249-4356-8ee6-1279602164ae-kube-api-access-lbzdc\") pod \"coredns-76f75df574-kvwcl\" (UID: \"575f75ad-b249-4356-8ee6-1279602164ae\") " pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:04.487294 kubelet[2622]: E1213 13:08:04.486994 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:04.487776 containerd[1444]: time="2024-12-13T13:08:04.487551072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:04.508883 kubelet[2622]: E1213 13:08:04.508768 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:04.511727 containerd[1444]: time="2024-12-13T13:08:04.511201724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:04.516906 containerd[1444]: time="2024-12-13T13:08:04.516661447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:08:04.523133 containerd[1444]: time="2024-12-13T13:08:04.523084331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:0,}" Dec 13 13:08:04.524393 kubelet[2622]: E1213 13:08:04.524074 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:04.525457 containerd[1444]: time="2024-12-13T13:08:04.525233012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:08:04.525609 containerd[1444]: time="2024-12-13T13:08:04.525579052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:08:04.813499 containerd[1444]: time="2024-12-13T13:08:04.813379244Z" level=error msg="Failed to destroy 
network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.816640 containerd[1444]: time="2024-12-13T13:08:04.815986285Z" level=error msg="Failed to destroy network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.816640 containerd[1444]: time="2024-12-13T13:08:04.816479446Z" level=error msg="Failed to destroy network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.818046 containerd[1444]: time="2024-12-13T13:08:04.818005166Z" level=error msg="encountered an error cleaning up failed sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.818111 containerd[1444]: time="2024-12-13T13:08:04.818090047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.818602 containerd[1444]: time="2024-12-13T13:08:04.818485487Z" level=error msg="Failed to destroy network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.818672 kubelet[2622]: E1213 13:08:04.818596 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.818672 kubelet[2622]: E1213 13:08:04.818652 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:04.818759 kubelet[2622]: E1213 13:08:04.818681 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:04.818759 kubelet[2622]: E1213 13:08:04.818734 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" podUID="2fee8b36-caea-489f-b412-0f8b4408366f" Dec 13 13:08:04.819706 containerd[1444]: time="2024-12-13T13:08:04.819523127Z" level=error msg="encountered an error cleaning up failed sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.819914 containerd[1444]: time="2024-12-13T13:08:04.819874447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.820077 kubelet[2622]: E1213 13:08:04.820054 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.820502 kubelet[2622]: E1213 13:08:04.820150 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:04.820502 kubelet[2622]: E1213 13:08:04.820172 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:04.820502 kubelet[2622]: E1213 13:08:04.820212 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-njbdr" podUID="6531a85c-6dd0-4079-bc39-8116bc3f4b54" Dec 13 13:08:04.821349 containerd[1444]: time="2024-12-13T13:08:04.821319528Z" level=error msg="Failed to destroy network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.821621 containerd[1444]: time="2024-12-13T13:08:04.821596688Z" level=error msg="encountered an error cleaning up failed sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.821663 containerd[1444]: time="2024-12-13T13:08:04.821643408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.821858 kubelet[2622]: E1213 13:08:04.821841 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.821899 kubelet[2622]: E1213 13:08:04.821880 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:04.821899 kubelet[2622]: E1213 13:08:04.821898 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:04.821961 kubelet[2622]: E1213 13:08:04.821946 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" Dec 13 13:08:04.822418 containerd[1444]: time="2024-12-13T13:08:04.822382609Z" level=error msg="encountered an error cleaning up failed sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.822465 containerd[1444]: time="2024-12-13T13:08:04.822446169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.823081 kubelet[2622]: E1213 13:08:04.823064 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.823121 kubelet[2622]: E1213 13:08:04.823097 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:04.823121 kubelet[2622]: E1213 13:08:04.823116 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:04.823184 kubelet[2622]: E1213 13:08:04.823157 2622 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kvwcl" podUID="575f75ad-b249-4356-8ee6-1279602164ae" Dec 13 13:08:04.824034 containerd[1444]: time="2024-12-13T13:08:04.823983170Z" level=error msg="encountered an error cleaning up failed sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.824076 containerd[1444]: time="2024-12-13T13:08:04.824053490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.824217 kubelet[2622]: E1213 13:08:04.824190 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:04.824259 kubelet[2622]: E1213 13:08:04.824248 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:04.824297 kubelet[2622]: E1213 13:08:04.824268 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:04.824370 kubelet[2622]: E1213 13:08:04.824348 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" podUID="b381e5c1-7896-4e9c-934b-2f01903d7a34" Dec 13 13:08:05.387565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d-shm.mount: Deactivated successfully. Dec 13 13:08:05.387663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3-shm.mount: Deactivated successfully. Dec 13 13:08:05.387714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2-shm.mount: Deactivated successfully. Dec 13 13:08:05.464119 systemd[1]: Created slice kubepods-besteffort-pod0cbbdf0f_40e1_46d6_a471_bc442a66a580.slice - libcontainer container kubepods-besteffort-pod0cbbdf0f_40e1_46d6_a471_bc442a66a580.slice. Dec 13 13:08:05.467502 containerd[1444]: time="2024-12-13T13:08:05.467463254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:0,}" Dec 13 13:08:05.526905 kubelet[2622]: I1213 13:08:05.526753 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707" Dec 13 13:08:05.528512 containerd[1444]: time="2024-12-13T13:08:05.528380365Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:05.530176 containerd[1444]: time="2024-12-13T13:08:05.528549565Z" level=info msg="Ensure that sandbox 7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707 in task-service has been cleanup successfully" Dec 13 13:08:05.530176 containerd[1444]: time="2024-12-13T13:08:05.528864845Z" level=info msg="TearDown network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" successfully" Dec 13 13:08:05.530176 containerd[1444]: time="2024-12-13T13:08:05.528881245Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" returns successfully" Dec 13 13:08:05.530257 kubelet[2622]: I1213 13:08:05.529235 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af" Dec 13 13:08:05.533544 containerd[1444]: time="2024-12-13T13:08:05.531047126Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:05.533544 containerd[1444]: time="2024-12-13T13:08:05.531189046Z" level=info msg="Ensure that sandbox 03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af in task-service has been cleanup successfully" Dec 13 13:08:05.533544 containerd[1444]: time="2024-12-13T13:08:05.531349206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:08:05.533544 containerd[1444]: time="2024-12-13T13:08:05.531738726Z" level=info 
msg="TearDown network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" successfully" Dec 13 13:08:05.533544 containerd[1444]: time="2024-12-13T13:08:05.531762446Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" returns successfully" Dec 13 13:08:05.531400 systemd[1]: run-netns-cni\x2d2009002b\x2dbc61\x2d230c\x2d2755\x2dddf0b9f117d5.mount: Deactivated successfully. Dec 13 13:08:05.533857 containerd[1444]: time="2024-12-13T13:08:05.533751487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:1,}" Dec 13 13:08:05.535152 kubelet[2622]: I1213 13:08:05.534667 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d" Dec 13 13:08:05.536157 systemd[1]: run-netns-cni\x2d83e171ef\x2d425c\x2d3a0c\x2d4d4c\x2d0056738dc371.mount: Deactivated successfully. Dec 13 13:08:05.536263 containerd[1444]: time="2024-12-13T13:08:05.536233009Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:05.536845 containerd[1444]: time="2024-12-13T13:08:05.536397649Z" level=info msg="Ensure that sandbox ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d in task-service has been cleanup successfully" Dec 13 13:08:05.537805 kubelet[2622]: I1213 13:08:05.537508 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3" Dec 13 13:08:05.538022 containerd[1444]: time="2024-12-13T13:08:05.537982849Z" level=info msg="TearDown network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" successfully" Dec 13 13:08:05.538022 containerd[1444]: time="2024-12-13T13:08:05.538016729Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" returns successfully" Dec 13 13:08:05.541045 containerd[1444]: time="2024-12-13T13:08:05.539881850Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:05.541045 containerd[1444]: time="2024-12-13T13:08:05.540077170Z" level=info msg="Ensure that sandbox 65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3 in task-service has been cleanup successfully" Dec 13 13:08:05.541045 containerd[1444]: time="2024-12-13T13:08:05.540147291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:08:05.540845 systemd[1]: run-netns-cni\x2d9e15ddf7\x2d9890\x2dfbff\x2d7743\x2d230266c4a958.mount: Deactivated successfully. 
Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.542066651Z" level=info msg="TearDown network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" successfully" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.542096651Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" returns successfully" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.544038532Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.544209413Z" level=info msg="Ensure that sandbox 88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2 in task-service has been cleanup successfully" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.544885533Z" level=info msg="TearDown network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" successfully" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.544903093Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" returns successfully" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.547380214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:1,}" Dec 13 13:08:05.547972 containerd[1444]: time="2024-12-13T13:08:05.547411654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:1,}" Dec 13 13:08:05.546010 systemd[1]: run-netns-cni\x2d236bac42\x2da223\x2d1c4c\x2d9f4e\x2dd797e95fdcb6.mount: Deactivated successfully. Dec 13 13:08:05.548311 kubelet[2622]: I1213 13:08:05.543180 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2" Dec 13 13:08:05.548311 kubelet[2622]: E1213 13:08:05.546134 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:05.548311 kubelet[2622]: E1213 13:08:05.547046 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:05.546099 systemd[1]: run-netns-cni\x2d0eb937b0\x2df1a6\x2d23a5\x2dbbbf\x2d34822cd91281.mount: Deactivated successfully. 
Dec 13 13:08:05.746941 containerd[1444]: time="2024-12-13T13:08:05.746881913Z" level=error msg="Failed to destroy network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.747334 containerd[1444]: time="2024-12-13T13:08:05.747297633Z" level=error msg="encountered an error cleaning up failed sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.747395 containerd[1444]: time="2024-12-13T13:08:05.747380153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.747632 kubelet[2622]: E1213 13:08:05.747601 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.747703 kubelet[2622]: E1213 13:08:05.747662 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:08:05.747703 kubelet[2622]: E1213 13:08:05.747686 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:08:05.747750 kubelet[2622]: E1213 13:08:05.747742 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bvvfs_calico-system(0cbbdf0f-40e1-46d6-a471-bc442a66a580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bvvfs_calico-system(0cbbdf0f-40e1-46d6-a471-bc442a66a580)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:08:05.793954 containerd[1444]: time="2024-12-13T13:08:05.792034335Z" level=error msg="Failed to destroy network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.793954 containerd[1444]: time="2024-12-13T13:08:05.792392416Z" level=error msg="encountered an error cleaning up failed sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.793954 containerd[1444]: time="2024-12-13T13:08:05.792456496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.794204 kubelet[2622]: E1213 13:08:05.792688 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.794204 kubelet[2622]: E1213 13:08:05.792738 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:05.794204 kubelet[2622]: E1213 13:08:05.792764 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:05.794292 kubelet[2622]: E1213 13:08:05.792817 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" podUID="2fee8b36-caea-489f-b412-0f8b4408366f" Dec 13 13:08:05.795294 containerd[1444]: time="2024-12-13T13:08:05.795217817Z" level=error msg="Failed to destroy network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.796916 containerd[1444]: time="2024-12-13T13:08:05.796870898Z" level=error msg="encountered an error cleaning up failed sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.797000 containerd[1444]: time="2024-12-13T13:08:05.796954498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.797190 kubelet[2622]: E1213 13:08:05.797165 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.797239 kubelet[2622]: E1213 13:08:05.797225 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:05.797270 kubelet[2622]: E1213 13:08:05.797246 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:05.797325 kubelet[2622]: E1213 13:08:05.797301 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" Dec 13 13:08:05.797512 containerd[1444]: time="2024-12-13T13:08:05.797475658Z" level=error msg="Failed to destroy network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.797773 containerd[1444]: time="2024-12-13T13:08:05.797746058Z" level=error msg="encountered an error cleaning up failed sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.797811 containerd[1444]: time="2024-12-13T13:08:05.797793458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.798091 kubelet[2622]: E1213 13:08:05.797947 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.798091 kubelet[2622]: E1213 13:08:05.797989 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:05.798091 kubelet[2622]: E1213 13:08:05.798007 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:05.798198 kubelet[2622]: E1213 13:08:05.798051 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-njbdr" podUID="6531a85c-6dd0-4079-bc39-8116bc3f4b54" Dec 13 13:08:05.804680 containerd[1444]: time="2024-12-13T13:08:05.804632582Z" level=error msg="Failed to destroy network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.805208 containerd[1444]: time="2024-12-13T13:08:05.805174622Z" level=error msg="encountered an error cleaning up failed sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.805262 containerd[1444]: time="2024-12-13T13:08:05.805241542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.805449 kubelet[2622]: E1213 13:08:05.805426 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.805492 kubelet[2622]: E1213 13:08:05.805486 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:05.805517 kubelet[2622]: E1213 13:08:05.805507 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:05.805574 kubelet[2622]: E1213 13:08:05.805561 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" podUID="b381e5c1-7896-4e9c-934b-2f01903d7a34" Dec 13 13:08:05.807290 containerd[1444]: time="2024-12-13T13:08:05.807222583Z" level=error msg="Failed to destroy network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.807561 containerd[1444]: time="2024-12-13T13:08:05.807528463Z" level=error msg="encountered an error cleaning up failed sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.807596 containerd[1444]: time="2024-12-13T13:08:05.807580983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.807775 kubelet[2622]: E1213 13:08:05.807754 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:05.807820 kubelet[2622]: E1213 13:08:05.807797 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:05.807820 kubelet[2622]: E1213 13:08:05.807817 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:05.807884 kubelet[2622]: E1213 
13:08:05.807866 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kvwcl" podUID="575f75ad-b249-4356-8ee6-1279602164ae" Dec 13 13:08:06.545871 kubelet[2622]: I1213 13:08:06.545840 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca" Dec 13 13:08:06.547739 containerd[1444]: time="2024-12-13T13:08:06.546279132Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" Dec 13 13:08:06.547739 containerd[1444]: time="2024-12-13T13:08:06.546450572Z" level=info msg="Ensure that sandbox ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca in task-service has been cleanup successfully" Dec 13 13:08:06.547739 containerd[1444]: time="2024-12-13T13:08:06.547725573Z" level=info msg="TearDown network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" successfully" Dec 13 13:08:06.548655 containerd[1444]: time="2024-12-13T13:08:06.547748133Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" returns successfully" Dec 13 13:08:06.548878 systemd[1]: run-netns-cni\x2d5dfee0e2\x2da95e\x2dcb06\x2d4b94\x2d3d5c4c09dc9d.mount: Deactivated successfully. 
Dec 13 13:08:06.550474 containerd[1444]: time="2024-12-13T13:08:06.550449534Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:06.550820 containerd[1444]: time="2024-12-13T13:08:06.550628414Z" level=info msg="TearDown network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" successfully" Dec 13 13:08:06.551244 containerd[1444]: time="2024-12-13T13:08:06.550935655Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" returns successfully" Dec 13 13:08:06.551435 containerd[1444]: time="2024-12-13T13:08:06.551352495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:08:06.551817 kubelet[2622]: I1213 13:08:06.551753 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a" Dec 13 13:08:06.552700 containerd[1444]: time="2024-12-13T13:08:06.552659055Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" Dec 13 13:08:06.553047 containerd[1444]: time="2024-12-13T13:08:06.553023096Z" level=info msg="Ensure that sandbox 6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a in task-service has been cleanup successfully" Dec 13 13:08:06.553108 kubelet[2622]: I1213 13:08:06.553087 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46" Dec 13 13:08:06.553404 containerd[1444]: time="2024-12-13T13:08:06.553377936Z" level=info msg="TearDown network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" successfully" Dec 13 13:08:06.553404 containerd[1444]: time="2024-12-13T13:08:06.553399296Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" returns successfully" Dec 13 13:08:06.554884 systemd[1]: run-netns-cni\x2d881f46f4\x2dd82e\x2d6361\x2d610e\x2ded311c530da3.mount: Deactivated successfully. 
Dec 13 13:08:06.555100 containerd[1444]: time="2024-12-13T13:08:06.555072216Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:06.555181 containerd[1444]: time="2024-12-13T13:08:06.555160457Z" level=info msg="TearDown network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" successfully" Dec 13 13:08:06.555215 containerd[1444]: time="2024-12-13T13:08:06.555176257Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" returns successfully" Dec 13 13:08:06.555336 containerd[1444]: time="2024-12-13T13:08:06.555311497Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" Dec 13 13:08:06.555657 containerd[1444]: time="2024-12-13T13:08:06.555439777Z" level=info msg="Ensure that sandbox 11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46 in task-service has been cleanup successfully" Dec 13 13:08:06.555657 containerd[1444]: time="2024-12-13T13:08:06.555653657Z" level=info msg="TearDown network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" successfully" Dec 13 13:08:06.555770 containerd[1444]: time="2024-12-13T13:08:06.555670417Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" returns successfully" Dec 13 13:08:06.558509 containerd[1444]: time="2024-12-13T13:08:06.558472618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:1,}" Dec 13 13:08:06.558612 systemd[1]: run-netns-cni\x2d20a745d7\x2d2806\x2d9b85\x2d2ec6\x2d9fe734ba7d43.mount: Deactivated successfully. 
Dec 13 13:08:06.559141 containerd[1444]: time="2024-12-13T13:08:06.558657258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:2,}" Dec 13 13:08:06.561000 kubelet[2622]: I1213 13:08:06.560584 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.561219859Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.561356019Z" level=info msg="Ensure that sandbox 807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816 in task-service has been cleanup successfully" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.561607420Z" level=info msg="TearDown network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" successfully" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.561621780Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" returns successfully" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.563111740Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.563195340Z" level=info msg="TearDown network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" successfully" Dec 13 13:08:06.563221 containerd[1444]: time="2024-12-13T13:08:06.563204620Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" returns successfully" Dec 13 13:08:06.563751 systemd[1]: run-netns-cni\x2de45091c8\x2d82b0\x2d7740\x2d73e1\x2de01c74f18d04.mount: Deactivated successfully. 
Dec 13 13:08:06.564532 containerd[1444]: time="2024-12-13T13:08:06.564502141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:08:06.565421 kubelet[2622]: I1213 13:08:06.565092 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461" Dec 13 13:08:06.565952 containerd[1444]: time="2024-12-13T13:08:06.565663701Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" Dec 13 13:08:06.565952 containerd[1444]: time="2024-12-13T13:08:06.565807741Z" level=info msg="Ensure that sandbox 371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461 in task-service has been cleanup successfully" Dec 13 13:08:06.567761 kubelet[2622]: I1213 13:08:06.567735 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210" Dec 13 13:08:06.567868 containerd[1444]: time="2024-12-13T13:08:06.567748422Z" level=info msg="TearDown network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" successfully" Dec 13 13:08:06.567941 containerd[1444]: time="2024-12-13T13:08:06.567910902Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" returns successfully" Dec 13 13:08:06.568378 containerd[1444]: time="2024-12-13T13:08:06.568336143Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" Dec 13 13:08:06.569437 containerd[1444]: time="2024-12-13T13:08:06.569412223Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:06.570080 containerd[1444]: time="2024-12-13T13:08:06.569904823Z" level=info msg="TearDown network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" successfully" Dec 13 13:08:06.570080 containerd[1444]: time="2024-12-13T13:08:06.569942383Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" returns successfully" Dec 13 13:08:06.570716 kubelet[2622]: E1213 13:08:06.570695 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:06.571157 containerd[1444]: time="2024-12-13T13:08:06.571005304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:2,}" Dec 13 13:08:06.662141 containerd[1444]: time="2024-12-13T13:08:06.662082706Z" level=info msg="Ensure that sandbox f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210 in task-service has been cleanup successfully" Dec 13 13:08:06.662938 containerd[1444]: time="2024-12-13T13:08:06.662768467Z" level=info msg="TearDown network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" successfully" Dec 13 13:08:06.662938 containerd[1444]: time="2024-12-13T13:08:06.662793307Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" returns successfully" Dec 13 13:08:06.663509 containerd[1444]: time="2024-12-13T13:08:06.663475787Z" level=info msg="StopPodSandbox for 
\"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:06.663627 containerd[1444]: time="2024-12-13T13:08:06.663573107Z" level=info msg="TearDown network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" successfully" Dec 13 13:08:06.663627 containerd[1444]: time="2024-12-13T13:08:06.663584987Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" returns successfully" Dec 13 13:08:06.663811 kubelet[2622]: E1213 13:08:06.663768 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:06.664405 containerd[1444]: time="2024-12-13T13:08:06.664354627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:2,}" Dec 13 13:08:06.908088 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:55346.service - OpenSSH per-connection server daemon (10.0.0.1:55346). Dec 13 13:08:06.988269 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 55346 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:06.989828 sshd-session[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:06.999277 systemd-logind[1431]: New session 9 of user core. Dec 13 13:08:07.005141 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:08:07.012906 containerd[1444]: time="2024-12-13T13:08:07.012848749Z" level=error msg="Failed to destroy network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.015381 containerd[1444]: time="2024-12-13T13:08:07.015314790Z" level=error msg="encountered an error cleaning up failed sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.015495 containerd[1444]: time="2024-12-13T13:08:07.015411470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.015736 kubelet[2622]: E1213 13:08:07.015698 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.015850 kubelet[2622]: E1213 13:08:07.015783 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:08:07.015968 kubelet[2622]: E1213 13:08:07.015853 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:08:07.016201 kubelet[2622]: E1213 13:08:07.016181 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bvvfs_calico-system(0cbbdf0f-40e1-46d6-a471-bc442a66a580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bvvfs_calico-system(0cbbdf0f-40e1-46d6-a471-bc442a66a580)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:08:07.017242 containerd[1444]: time="2024-12-13T13:08:07.016853151Z" level=error msg="Failed to destroy network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.018443 containerd[1444]: time="2024-12-13T13:08:07.018405711Z" level=error msg="encountered an error cleaning up failed sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.018598 containerd[1444]: time="2024-12-13T13:08:07.018576671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.018902 kubelet[2622]: E1213 13:08:07.018873 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.019042 kubelet[2622]: E1213 13:08:07.018940 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:07.019042 kubelet[2622]: E1213 13:08:07.018961 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:07.019042 kubelet[2622]: E1213 13:08:07.019003 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" podUID="b381e5c1-7896-4e9c-934b-2f01903d7a34" Dec 13 13:08:07.023637 containerd[1444]: time="2024-12-13T13:08:07.023588594Z" level=error msg="Failed to destroy network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.023946 containerd[1444]: time="2024-12-13T13:08:07.023906034Z" level=error msg="encountered an error cleaning up failed sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.024001 containerd[1444]: time="2024-12-13T13:08:07.023979194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.024342 kubelet[2622]: E1213 13:08:07.024174 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 13:08:07.024342 kubelet[2622]: E1213 13:08:07.024233 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:07.024342 kubelet[2622]: E1213 13:08:07.024257 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:07.024590 kubelet[2622]: E1213 13:08:07.024312 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" podUID="2fee8b36-caea-489f-b412-0f8b4408366f" Dec 13 13:08:07.027971 containerd[1444]: time="2024-12-13T13:08:07.027886995Z" level=error msg="Failed to destroy network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.029029 containerd[1444]: time="2024-12-13T13:08:07.028447716Z" level=error msg="encountered an error cleaning up failed sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.029029 containerd[1444]: time="2024-12-13T13:08:07.028511196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.029374 kubelet[2622]: E1213 13:08:07.028721 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.029374 kubelet[2622]: E1213 13:08:07.028768 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:07.029374 kubelet[2622]: E1213 13:08:07.028803 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:07.029532 kubelet[2622]: E1213 13:08:07.028858 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" Dec 13 13:08:07.035650 containerd[1444]: time="2024-12-13T13:08:07.034662838Z" level=error msg="Failed to destroy network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.035650 containerd[1444]: time="2024-12-13T13:08:07.035354279Z" level=error msg="encountered an error cleaning up failed sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.035650 containerd[1444]: time="2024-12-13T13:08:07.035421679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.035815 kubelet[2622]: E1213 13:08:07.035657 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.035815 kubelet[2622]: E1213 13:08:07.035803 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:07.035872 kubelet[2622]: E1213 13:08:07.035829 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:07.035932 kubelet[2622]: E1213 13:08:07.035902 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-njbdr" podUID="6531a85c-6dd0-4079-bc39-8116bc3f4b54" Dec 13 13:08:07.044366 containerd[1444]: time="2024-12-13T13:08:07.044309083Z" level=error msg="Failed to destroy network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.044654 containerd[1444]: time="2024-12-13T13:08:07.044628603Z" level=error msg="encountered an error cleaning up failed sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.044732 containerd[1444]: time="2024-12-13T13:08:07.044692763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.044992 kubelet[2622]: E1213 13:08:07.044968 2622 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.045054 kubelet[2622]: E1213 13:08:07.045020 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:07.045054 kubelet[2622]: E1213 13:08:07.045039 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:07.045300 kubelet[2622]: E1213 13:08:07.045254 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kvwcl" podUID="575f75ad-b249-4356-8ee6-1279602164ae" Dec 13 13:08:07.137636 sshd[3982]: Connection closed by 10.0.0.1 port 55346 Dec 13 13:08:07.138083 sshd-session[3835]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:07.142568 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:55346.service: Deactivated successfully. Dec 13 13:08:07.146833 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:08:07.148084 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:08:07.149793 systemd-logind[1431]: Removed session 9. Dec 13 13:08:07.387033 systemd[1]: run-netns-cni\x2d382c5e13\x2d32f5\x2dba0a\x2df74a\x2df471ef58cb7c.mount: Deactivated successfully. Dec 13 13:08:07.387119 systemd[1]: run-netns-cni\x2de9e16b69\x2dc60a\x2d7ce0\x2d202a\x2d2bde79f20bda.mount: Deactivated successfully. 
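
The burst of failures above has a single cause, repeated verbatim in every error: the Calico CNI plugin cannot stat /var/lib/calico/nodename, because the calico/node container that writes that file has not started yet. A minimal Go sketch of that readiness gate (names like calicoNodeReady and nodenameFile are illustrative, not Calico's actual code; the real plugin does considerably more than read one file):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// nodenameFile is the path quoted verbatim in the stat errors above.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady models the gate every RunPodSandbox call is failing at:
// until calico/node writes its node name here, pod networking cannot be set up.
func calicoNodeReady() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// The state captured in this log: the file does not exist yet.
		return "", fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	if name, err := calicoNodeReady(); err != nil {
		fmt.Println("not ready:", err)
	} else {
		fmt.Println("node:", name)
	}
}

Until that read succeeds, every sandbox attempt fails identically, which is why kubelet keeps tearing sandboxes down and recreating them with a higher Attempt counter below.
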
Dec 13 13:08:07.573233 kubelet[2622]: I1213 13:08:07.573204 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e" Dec 13 13:08:07.574482 containerd[1444]: time="2024-12-13T13:08:07.574446074Z" level=info msg="StopPodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\"" Dec 13 13:08:07.574758 containerd[1444]: time="2024-12-13T13:08:07.574664714Z" level=info msg="Ensure that sandbox 7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e in task-service has been cleanup successfully" Dec 13 13:08:07.575619 containerd[1444]: time="2024-12-13T13:08:07.575568794Z" level=info msg="TearDown network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" successfully" Dec 13 13:08:07.575619 containerd[1444]: time="2024-12-13T13:08:07.575588314Z" level=info msg="StopPodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" returns successfully" Dec 13 13:08:07.576728 containerd[1444]: time="2024-12-13T13:08:07.576414434Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" Dec 13 13:08:07.576728 containerd[1444]: time="2024-12-13T13:08:07.576547475Z" level=info msg="TearDown network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" successfully" Dec 13 13:08:07.576728 containerd[1444]: time="2024-12-13T13:08:07.576560635Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" returns successfully" Dec 13 13:08:07.577751 containerd[1444]: time="2024-12-13T13:08:07.577293595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:2,}" Dec 13 13:08:07.577827 kubelet[2622]: I1213 13:08:07.577371 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f" Dec 13 13:08:07.578027 containerd[1444]: time="2024-12-13T13:08:07.577996315Z" level=info msg="StopPodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\"" Dec 13 13:08:07.578213 containerd[1444]: time="2024-12-13T13:08:07.578182995Z" level=info msg="Ensure that sandbox 485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f in task-service has been cleanup successfully" Dec 13 13:08:07.578638 containerd[1444]: time="2024-12-13T13:08:07.578600075Z" level=info msg="TearDown network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" successfully" Dec 13 13:08:07.578699 systemd[1]: run-netns-cni\x2df9c7249b\x2d15a6\x2dfa7b\x2d0d43\x2de7ee727c4540.mount: Deactivated successfully. 
Dec 13 13:08:07.578907 containerd[1444]: time="2024-12-13T13:08:07.578882676Z" level=info msg="StopPodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" returns successfully" Dec 13 13:08:07.579565 containerd[1444]: time="2024-12-13T13:08:07.579533676Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" Dec 13 13:08:07.579852 containerd[1444]: time="2024-12-13T13:08:07.579830636Z" level=info msg="TearDown network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" successfully" Dec 13 13:08:07.579916 containerd[1444]: time="2024-12-13T13:08:07.579903676Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" returns successfully" Dec 13 13:08:07.582516 systemd[1]: run-netns-cni\x2d82ed2362\x2d92a1\x2d443f\x2d9d94\x2d1454f6d0b22c.mount: Deactivated successfully. Dec 13 13:08:07.583955 containerd[1444]: time="2024-12-13T13:08:07.583501198Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:07.583955 containerd[1444]: time="2024-12-13T13:08:07.583580118Z" level=info msg="TearDown network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" successfully" Dec 13 13:08:07.583955 containerd[1444]: time="2024-12-13T13:08:07.583590438Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" returns successfully" Dec 13 13:08:07.586080 kubelet[2622]: I1213 13:08:07.584933 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca" Dec 13 13:08:07.587122 containerd[1444]: time="2024-12-13T13:08:07.586678039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:08:07.587322 containerd[1444]: time="2024-12-13T13:08:07.587287119Z" level=info msg="StopPodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\"" Dec 13 13:08:07.587785 containerd[1444]: time="2024-12-13T13:08:07.587752359Z" level=info msg="Ensure that sandbox d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca in task-service has been cleanup successfully" Dec 13 13:08:07.588613 containerd[1444]: time="2024-12-13T13:08:07.588569840Z" level=info msg="TearDown network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" successfully" Dec 13 13:08:07.588613 containerd[1444]: time="2024-12-13T13:08:07.588601520Z" level=info msg="StopPodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" returns successfully" Dec 13 13:08:07.589573 kubelet[2622]: I1213 13:08:07.589473 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b" Dec 13 13:08:07.589678 containerd[1444]: time="2024-12-13T13:08:07.589650720Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" Dec 13 13:08:07.589747 containerd[1444]: time="2024-12-13T13:08:07.589729320Z" level=info msg="TearDown network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" successfully" Dec 13 13:08:07.589747 containerd[1444]: time="2024-12-13T13:08:07.589742800Z" level=info msg="StopPodSandbox for 
\"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" returns successfully" Dec 13 13:08:07.589955 containerd[1444]: time="2024-12-13T13:08:07.589932560Z" level=info msg="StopPodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\"" Dec 13 13:08:07.591058 containerd[1444]: time="2024-12-13T13:08:07.590123200Z" level=info msg="Ensure that sandbox 8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b in task-service has been cleanup successfully" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.591462961Z" level=info msg="TearDown network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" successfully" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.591488401Z" level=info msg="StopPodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" returns successfully" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.591531761Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.591610161Z" level=info msg="TearDown network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" successfully" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.591620041Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" returns successfully" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.592059641Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.592156201Z" level=info msg="TearDown network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" successfully" Dec 13 13:08:07.592948 containerd[1444]: time="2024-12-13T13:08:07.592166801Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" returns successfully" Dec 13 13:08:07.592307 systemd[1]: run-netns-cni\x2de6322172\x2d0a10\x2d6443\x2d88bc\x2d3d0abdd6cfd2.mount: Deactivated successfully. 
Dec 13 13:08:07.593308 kubelet[2622]: E1213 13:08:07.592021 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:07.593349 containerd[1444]: time="2024-12-13T13:08:07.592957282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:3,}" Dec 13 13:08:07.596983 containerd[1444]: time="2024-12-13T13:08:07.594123162Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:07.596983 containerd[1444]: time="2024-12-13T13:08:07.594213762Z" level=info msg="TearDown network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" successfully" Dec 13 13:08:07.596983 containerd[1444]: time="2024-12-13T13:08:07.594224042Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" returns successfully" Dec 13 13:08:07.596983 containerd[1444]: time="2024-12-13T13:08:07.594870242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:3,}" Dec 13 13:08:07.596983 containerd[1444]: time="2024-12-13T13:08:07.595783403Z" level=info msg="StopPodSandbox for \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\"" Dec 13 13:08:07.596579 systemd[1]: run-netns-cni\x2da587cacc\x2d938a\x2d3aec\x2d1a56\x2d1e5289fe334d.mount: Deactivated successfully. Dec 13 13:08:07.597330 kubelet[2622]: E1213 13:08:07.594477 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:07.597330 kubelet[2622]: I1213 13:08:07.595282 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81" Dec 13 13:08:07.598200 kubelet[2622]: I1213 13:08:07.597778 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b" Dec 13 13:08:07.598819 containerd[1444]: time="2024-12-13T13:08:07.598706964Z" level=info msg="StopPodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\"" Dec 13 13:08:07.598952 containerd[1444]: time="2024-12-13T13:08:07.598892004Z" level=info msg="Ensure that sandbox 0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b in task-service has been cleanup successfully" Dec 13 13:08:07.599187 containerd[1444]: time="2024-12-13T13:08:07.599129164Z" level=info msg="TearDown network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" successfully" Dec 13 13:08:07.599217 containerd[1444]: time="2024-12-13T13:08:07.599187724Z" level=info msg="StopPodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" returns successfully" Dec 13 13:08:07.599588 containerd[1444]: time="2024-12-13T13:08:07.599562085Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" Dec 13 13:08:07.599811 containerd[1444]: time="2024-12-13T13:08:07.599663925Z" level=info msg="TearDown network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" successfully" Dec 13 13:08:07.599811 containerd[1444]: 
time="2024-12-13T13:08:07.599675125Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" returns successfully" Dec 13 13:08:07.600017 containerd[1444]: time="2024-12-13T13:08:07.599946725Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:07.600076 containerd[1444]: time="2024-12-13T13:08:07.600024245Z" level=info msg="TearDown network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" successfully" Dec 13 13:08:07.600076 containerd[1444]: time="2024-12-13T13:08:07.600034725Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" returns successfully" Dec 13 13:08:07.600481 containerd[1444]: time="2024-12-13T13:08:07.600453645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:3,}" Dec 13 13:08:07.628840 containerd[1444]: time="2024-12-13T13:08:07.628657097Z" level=info msg="Ensure that sandbox 68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81 in task-service has been cleanup successfully" Dec 13 13:08:07.629033 containerd[1444]: time="2024-12-13T13:08:07.629012657Z" level=info msg="TearDown network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" successfully" Dec 13 13:08:07.629088 containerd[1444]: time="2024-12-13T13:08:07.629075737Z" level=info msg="StopPodSandbox for \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" returns successfully" Dec 13 13:08:07.629619 containerd[1444]: time="2024-12-13T13:08:07.629593018Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" Dec 13 13:08:07.630114 containerd[1444]: time="2024-12-13T13:08:07.629902938Z" level=info msg="TearDown network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" successfully" Dec 13 13:08:07.630114 containerd[1444]: time="2024-12-13T13:08:07.629942738Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" returns successfully" Dec 13 13:08:07.630520 containerd[1444]: time="2024-12-13T13:08:07.630375818Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:07.630520 containerd[1444]: time="2024-12-13T13:08:07.630461818Z" level=info msg="TearDown network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" successfully" Dec 13 13:08:07.630520 containerd[1444]: time="2024-12-13T13:08:07.630472258Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" returns successfully" Dec 13 13:08:07.631179 containerd[1444]: time="2024-12-13T13:08:07.631139338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:08:07.739951 containerd[1444]: time="2024-12-13T13:08:07.739857946Z" level=error msg="Failed to destroy network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.740764 containerd[1444]: 
time="2024-12-13T13:08:07.740675226Z" level=error msg="encountered an error cleaning up failed sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.740832 containerd[1444]: time="2024-12-13T13:08:07.740810346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.741132 kubelet[2622]: E1213 13:08:07.741083 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.741226 kubelet[2622]: E1213 13:08:07.741179 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:08:07.741226 kubelet[2622]: E1213 13:08:07.741203 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bvvfs" Dec 13 13:08:07.741279 kubelet[2622]: E1213 13:08:07.741260 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bvvfs_calico-system(0cbbdf0f-40e1-46d6-a471-bc442a66a580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bvvfs_calico-system(0cbbdf0f-40e1-46d6-a471-bc442a66a580)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bvvfs" podUID="0cbbdf0f-40e1-46d6-a471-bc442a66a580" Dec 13 13:08:07.852087 containerd[1444]: time="2024-12-13T13:08:07.852039235Z" level=error msg="Failed to destroy network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 13:08:07.852269 containerd[1444]: time="2024-12-13T13:08:07.852091635Z" level=error msg="Failed to destroy network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.852438 containerd[1444]: time="2024-12-13T13:08:07.852409315Z" level=error msg="encountered an error cleaning up failed sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.852499 containerd[1444]: time="2024-12-13T13:08:07.852479635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.852559 containerd[1444]: time="2024-12-13T13:08:07.852429235Z" level=error msg="encountered an error cleaning up failed sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.852627 containerd[1444]: time="2024-12-13T13:08:07.852604675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.852821 kubelet[2622]: E1213 13:08:07.852798 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.852866 kubelet[2622]: E1213 13:08:07.852850 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:07.852888 kubelet[2622]: E1213 13:08:07.852875 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" Dec 13 13:08:07.852958 kubelet[2622]: E1213 13:08:07.852941 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ggvqp_calico-apiserver(b381e5c1-7896-4e9c-934b-2f01903d7a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" podUID="b381e5c1-7896-4e9c-934b-2f01903d7a34" Dec 13 13:08:07.853009 kubelet[2622]: E1213 13:08:07.852978 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.853009 kubelet[2622]: E1213 13:08:07.852999 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:07.853053 kubelet[2622]: E1213 13:08:07.853014 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" Dec 13 13:08:07.853076 kubelet[2622]: E1213 13:08:07.853058 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6cbc9658-ph7sh_calico-apiserver(2fee8b36-caea-489f-b412-0f8b4408366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" podUID="2fee8b36-caea-489f-b412-0f8b4408366f" Dec 13 13:08:07.928768 containerd[1444]: time="2024-12-13T13:08:07.928701908Z" 
level=error msg="Failed to destroy network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.929221 containerd[1444]: time="2024-12-13T13:08:07.929192228Z" level=error msg="encountered an error cleaning up failed sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.929370 containerd[1444]: time="2024-12-13T13:08:07.929331788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.929994 kubelet[2622]: E1213 13:08:07.929965 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.930070 kubelet[2622]: E1213 13:08:07.930026 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:07.930070 kubelet[2622]: E1213 13:08:07.930047 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kvwcl" Dec 13 13:08:07.930122 kubelet[2622]: E1213 13:08:07.930106 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kvwcl_kube-system(575f75ad-b249-4356-8ee6-1279602164ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kvwcl" podUID="575f75ad-b249-4356-8ee6-1279602164ae" Dec 13 13:08:07.961535 
containerd[1444]: time="2024-12-13T13:08:07.961459442Z" level=error msg="Failed to destroy network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.961938 containerd[1444]: time="2024-12-13T13:08:07.961898282Z" level=error msg="encountered an error cleaning up failed sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.962158 containerd[1444]: time="2024-12-13T13:08:07.961988282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.962270 kubelet[2622]: E1213 13:08:07.962242 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.962329 kubelet[2622]: E1213 13:08:07.962293 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:07.962329 kubelet[2622]: E1213 13:08:07.962319 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-njbdr" Dec 13 13:08:07.962387 kubelet[2622]: E1213 13:08:07.962381 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-njbdr_kube-system(6531a85c-6dd0-4079-bc39-8116bc3f4b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-njbdr" 
podUID="6531a85c-6dd0-4079-bc39-8116bc3f4b54" Dec 13 13:08:07.986723 containerd[1444]: time="2024-12-13T13:08:07.986665213Z" level=error msg="Failed to destroy network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.987108 containerd[1444]: time="2024-12-13T13:08:07.987070093Z" level=error msg="encountered an error cleaning up failed sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.987162 containerd[1444]: time="2024-12-13T13:08:07.987145253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.987541 kubelet[2622]: E1213 13:08:07.987378 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:08:07.987541 kubelet[2622]: E1213 13:08:07.987430 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:07.987541 kubelet[2622]: E1213 13:08:07.987455 2622 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" Dec 13 13:08:07.987659 kubelet[2622]: E1213 13:08:07.987511 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6895d58756-pfggb_calico-system(afc0c628-56bd-4014-86d9-0b030f93cf65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" Dec 13 13:08:07.988851 containerd[1444]: time="2024-12-13T13:08:07.988812494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:07.989787 containerd[1444]: time="2024-12-13T13:08:07.989688055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 13:08:07.993102 containerd[1444]: time="2024-12-13T13:08:07.992040456Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:07.995163 containerd[1444]: time="2024-12-13T13:08:07.995098617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:07.995800 containerd[1444]: time="2024-12-13T13:08:07.995600177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.470304085s" Dec 13 13:08:07.995800 containerd[1444]: time="2024-12-13T13:08:07.995639897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 13:08:08.017148 containerd[1444]: time="2024-12-13T13:08:08.017096186Z" level=info msg="CreateContainer within sandbox \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:08:08.031275 containerd[1444]: time="2024-12-13T13:08:08.031225552Z" level=info msg="CreateContainer within sandbox \"17c817c737b6a17ab7a88f8e2b5162789abbeefb2794567a0ce86c4469dbc5d2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8112da0ba7e8f55ee6d22436cf808f0efed02d7e553161da1fe3b21c74e3cf55\"" Dec 13 13:08:08.031829 containerd[1444]: time="2024-12-13T13:08:08.031804792Z" level=info msg="StartContainer for \"8112da0ba7e8f55ee6d22436cf808f0efed02d7e553161da1fe3b21c74e3cf55\"" Dec 13 13:08:08.085111 systemd[1]: Started cri-containerd-8112da0ba7e8f55ee6d22436cf808f0efed02d7e553161da1fe3b21c74e3cf55.scope - libcontainer container 8112da0ba7e8f55ee6d22436cf808f0efed02d7e553161da1fe3b21c74e3cf55. Dec 13 13:08:08.114416 containerd[1444]: time="2024-12-13T13:08:08.114352506Z" level=info msg="StartContainer for \"8112da0ba7e8f55ee6d22436cf808f0efed02d7e553161da1fe3b21c74e3cf55\" returns successfully" Dec 13 13:08:08.390323 systemd[1]: run-netns-cni\x2d8a526a0b\x2d2bca\x2db924\x2d55ee\x2dffea7a2de4d8.mount: Deactivated successfully. Dec 13 13:08:08.390425 systemd[1]: run-netns-cni\x2da7056017\x2de4c4\x2d7a08\x2ddc23\x2dae1a076f975a.mount: Deactivated successfully. Dec 13 13:08:08.390639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773396876.mount: Deactivated successfully. Dec 13 13:08:08.459689 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 13 13:08:08.459799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 13:08:08.602505 kubelet[2622]: I1213 13:08:08.602463 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66" Dec 13 13:08:08.603423 containerd[1444]: time="2024-12-13T13:08:08.603333745Z" level=info msg="StopPodSandbox for \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\"" Dec 13 13:08:08.604270 containerd[1444]: time="2024-12-13T13:08:08.603671706Z" level=info msg="Ensure that sandbox 0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66 in task-service has been cleanup successfully" Dec 13 13:08:08.604270 containerd[1444]: time="2024-12-13T13:08:08.603967666Z" level=info msg="TearDown network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\" successfully" Dec 13 13:08:08.604270 containerd[1444]: time="2024-12-13T13:08:08.604027066Z" level=info msg="StopPodSandbox for \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\" returns successfully" Dec 13 13:08:08.606450 systemd[1]: run-netns-cni\x2defdd3c62\x2d4eb3\x2dedc1\x2de5a8\x2d864d04f281e2.mount: Deactivated successfully. Dec 13 13:08:08.607836 containerd[1444]: time="2024-12-13T13:08:08.607635187Z" level=info msg="StopPodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\"" Dec 13 13:08:08.607836 containerd[1444]: time="2024-12-13T13:08:08.607727987Z" level=info msg="TearDown network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" successfully" Dec 13 13:08:08.607836 containerd[1444]: time="2024-12-13T13:08:08.607739587Z" level=info msg="StopPodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" returns successfully" Dec 13 13:08:08.608212 containerd[1444]: time="2024-12-13T13:08:08.608191187Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" Dec 13 13:08:08.608349 containerd[1444]: time="2024-12-13T13:08:08.608257468Z" level=info msg="TearDown network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" successfully" Dec 13 13:08:08.608349 containerd[1444]: time="2024-12-13T13:08:08.608267548Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" returns successfully" Dec 13 13:08:08.608822 containerd[1444]: time="2024-12-13T13:08:08.608674668Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:08.608822 containerd[1444]: time="2024-12-13T13:08:08.608764228Z" level=info msg="TearDown network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" successfully" Dec 13 13:08:08.608822 containerd[1444]: time="2024-12-13T13:08:08.608775788Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" returns successfully" Dec 13 13:08:08.609701 containerd[1444]: time="2024-12-13T13:08:08.609658988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:4,}" Dec 13 13:08:08.609856 kubelet[2622]: I1213 13:08:08.609831 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9" Dec 13
13:08:08.610917 containerd[1444]: time="2024-12-13T13:08:08.610850669Z" level=info msg="StopPodSandbox for \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\"" Dec 13 13:08:08.611166 containerd[1444]: time="2024-12-13T13:08:08.611136709Z" level=info msg="Ensure that sandbox ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9 in task-service has been cleanup successfully" Dec 13 13:08:08.612171 containerd[1444]: time="2024-12-13T13:08:08.611936389Z" level=info msg="TearDown network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\" successfully" Dec 13 13:08:08.612171 containerd[1444]: time="2024-12-13T13:08:08.611962789Z" level=info msg="StopPodSandbox for \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\" returns successfully" Dec 13 13:08:08.614337 systemd[1]: run-netns-cni\x2d8855aeac\x2d004b\x2d78b4\x2ddfa8\x2d2b60a284801e.mount: Deactivated successfully. Dec 13 13:08:08.615030 containerd[1444]: time="2024-12-13T13:08:08.614719510Z" level=info msg="StopPodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\"" Dec 13 13:08:08.615030 containerd[1444]: time="2024-12-13T13:08:08.614830190Z" level=info msg="TearDown network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" successfully" Dec 13 13:08:08.615030 containerd[1444]: time="2024-12-13T13:08:08.614839990Z" level=info msg="StopPodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" returns successfully" Dec 13 13:08:08.615431 containerd[1444]: time="2024-12-13T13:08:08.615403870Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" Dec 13 13:08:08.615513 containerd[1444]: time="2024-12-13T13:08:08.615496910Z" level=info msg="TearDown network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" successfully" Dec 13 13:08:08.615513 containerd[1444]: time="2024-12-13T13:08:08.615511550Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" returns successfully" Dec 13 13:08:08.616553 containerd[1444]: time="2024-12-13T13:08:08.616463111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:3,}" Dec 13 13:08:08.617789 kubelet[2622]: I1213 13:08:08.617761 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b" Dec 13 13:08:08.618450 containerd[1444]: time="2024-12-13T13:08:08.618292312Z" level=info msg="StopPodSandbox for \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\"" Dec 13 13:08:08.618666 containerd[1444]: time="2024-12-13T13:08:08.618458752Z" level=info msg="Ensure that sandbox fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b in task-service has been cleanup successfully" Dec 13 13:08:08.618666 containerd[1444]: time="2024-12-13T13:08:08.618647352Z" level=info msg="TearDown network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\" successfully" Dec 13 13:08:08.618666 containerd[1444]: time="2024-12-13T13:08:08.618663472Z" level=info msg="StopPodSandbox for \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\" returns successfully" Dec 13 13:08:08.620018 containerd[1444]: time="2024-12-13T13:08:08.619386352Z" level=info msg="StopPodSandbox for 
\"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\"" Dec 13 13:08:08.620018 containerd[1444]: time="2024-12-13T13:08:08.619848632Z" level=info msg="TearDown network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" successfully" Dec 13 13:08:08.620018 containerd[1444]: time="2024-12-13T13:08:08.619868392Z" level=info msg="StopPodSandbox for \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" returns successfully" Dec 13 13:08:08.620514 containerd[1444]: time="2024-12-13T13:08:08.620482432Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" Dec 13 13:08:08.621191 containerd[1444]: time="2024-12-13T13:08:08.620669393Z" level=info msg="TearDown network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" successfully" Dec 13 13:08:08.621191 containerd[1444]: time="2024-12-13T13:08:08.620752033Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" returns successfully" Dec 13 13:08:08.621063 systemd[1]: run-netns-cni\x2da610026d\x2daf4c\x2d6b2f\x2d7c07\x2df9f4b97ddb01.mount: Deactivated successfully. Dec 13 13:08:08.622619 containerd[1444]: time="2024-12-13T13:08:08.622513913Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:08.622619 containerd[1444]: time="2024-12-13T13:08:08.622603593Z" level=info msg="TearDown network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" successfully" Dec 13 13:08:08.622619 containerd[1444]: time="2024-12-13T13:08:08.622614153Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" returns successfully" Dec 13 13:08:08.623386 containerd[1444]: time="2024-12-13T13:08:08.623181754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:08:08.623641 kubelet[2622]: I1213 13:08:08.623524 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55" Dec 13 13:08:08.625149 containerd[1444]: time="2024-12-13T13:08:08.625123834Z" level=info msg="StopPodSandbox for \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\"" Dec 13 13:08:08.625328 containerd[1444]: time="2024-12-13T13:08:08.625271154Z" level=info msg="Ensure that sandbox 35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55 in task-service has been cleanup successfully" Dec 13 13:08:08.627189 systemd[1]: run-netns-cni\x2de4104d51\x2d2f82\x2d7e82\x2d5be7\x2d7b4fbe49dadc.mount: Deactivated successfully. 
Dec 13 13:08:08.627905 kubelet[2622]: I1213 13:08:08.627869 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018" Dec 13 13:08:08.628304 containerd[1444]: time="2024-12-13T13:08:08.628251156Z" level=info msg="TearDown network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\" successfully" Dec 13 13:08:08.628304 containerd[1444]: time="2024-12-13T13:08:08.628284956Z" level=info msg="StopPodSandbox for \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\" returns successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629359996Z" level=info msg="StopPodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\"" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629449476Z" level=info msg="TearDown network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629459836Z" level=info msg="StopPodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" returns successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629519556Z" level=info msg="StopPodSandbox for \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\"" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629653996Z" level=info msg="Ensure that sandbox 56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018 in task-service has been cleanup successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629811756Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629888596Z" level=info msg="TearDown network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.629897876Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" returns successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.630385957Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.630450477Z" level=info msg="TearDown network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" successfully" Dec 13 13:08:08.630753 containerd[1444]: time="2024-12-13T13:08:08.630460957Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" returns successfully" Dec 13 13:08:08.631154 containerd[1444]: time="2024-12-13T13:08:08.630999397Z" level=info msg="TearDown network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\" successfully" Dec 13 13:08:08.631154 containerd[1444]: time="2024-12-13T13:08:08.631015477Z" level=info msg="StopPodSandbox for \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\" returns successfully" Dec 13 13:08:08.631720 containerd[1444]: time="2024-12-13T13:08:08.631422437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:08:08.632067 containerd[1444]: 
time="2024-12-13T13:08:08.632043877Z" level=info msg="StopPodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\"" Dec 13 13:08:08.632103 kubelet[2622]: I1213 13:08:08.632049 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3" Dec 13 13:08:08.632141 containerd[1444]: time="2024-12-13T13:08:08.632122277Z" level=info msg="TearDown network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" successfully" Dec 13 13:08:08.632141 containerd[1444]: time="2024-12-13T13:08:08.632131517Z" level=info msg="StopPodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" returns successfully" Dec 13 13:08:08.633710 containerd[1444]: time="2024-12-13T13:08:08.632942118Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" Dec 13 13:08:08.633710 containerd[1444]: time="2024-12-13T13:08:08.633111678Z" level=info msg="TearDown network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" successfully" Dec 13 13:08:08.633710 containerd[1444]: time="2024-12-13T13:08:08.633125478Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" returns successfully" Dec 13 13:08:08.633710 containerd[1444]: time="2024-12-13T13:08:08.633193758Z" level=info msg="StopPodSandbox for \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\"" Dec 13 13:08:08.633710 containerd[1444]: time="2024-12-13T13:08:08.633490598Z" level=info msg="Ensure that sandbox 7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3 in task-service has been cleanup successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.633798238Z" level=info msg="TearDown network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\" successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.633817678Z" level=info msg="StopPodSandbox for \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\" returns successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.634168638Z" level=info msg="StopPodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\"" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.634211478Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.634577518Z" level=info msg="TearDown network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.634588878Z" level=info msg="StopPodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" returns successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.634645798Z" level=info msg="TearDown network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.634656598Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" returns successfully" Dec 13 13:08:08.635157 containerd[1444]: time="2024-12-13T13:08:08.635105798Z" level=info msg="StopPodSandbox for 
\"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" Dec 13 13:08:08.636498 kubelet[2622]: E1213 13:08:08.634800 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:08.636498 kubelet[2622]: E1213 13:08:08.635965 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.635172758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:4,}" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.635182838Z" level=info msg="TearDown network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" successfully" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.635341119Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" returns successfully" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.635724439Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.635799439Z" level=info msg="TearDown network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" successfully" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.635808479Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" returns successfully" Dec 13 13:08:08.636563 containerd[1444]: time="2024-12-13T13:08:08.636207279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:4,}" Dec 13 13:08:08.644744 kubelet[2622]: E1213 13:08:08.644617 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:08.662060 kubelet[2622]: I1213 13:08:08.661808 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-k762b" podStartSLOduration=1.828218911 podStartE2EDuration="13.661769489s" podCreationTimestamp="2024-12-13 13:07:55 +0000 UTC" firstStartedPulling="2024-12-13 13:07:56.162356719 +0000 UTC m=+22.810245695" lastFinishedPulling="2024-12-13 13:08:07.995907257 +0000 UTC m=+34.643796273" observedRunningTime="2024-12-13 13:08:08.661330609 +0000 UTC m=+35.309219625" watchObservedRunningTime="2024-12-13 13:08:08.661769489 +0000 UTC m=+35.309658505" Dec 13 13:08:09.389254 systemd-networkd[1389]: cali0127114f0a3: Link UP Dec 13 13:08:09.389418 systemd-networkd[1389]: cali0127114f0a3: Gained carrier Dec 13 13:08:09.390839 systemd[1]: run-netns-cni\x2db9585a0b\x2d6ac1\x2df313\x2ded85\x2d822ecf7d7531.mount: Deactivated successfully. Dec 13 13:08:09.391121 systemd[1]: run-netns-cni\x2d1957b967\x2da2e5\x2dde15\x2dabc0\x2d20fb0eb98011.mount: Deactivated successfully. 
Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:08.768 [INFO][4324] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:08.863 [INFO][4324] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0 calico-apiserver-7d6cbc9658- calico-apiserver 2fee8b36-caea-489f-b412-0f8b4408366f 786 0 2024-12-13 13:07:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6cbc9658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d6cbc9658-ph7sh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0127114f0a3 [] []}} ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:08.864 [INFO][4324] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.299 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" HandleID="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Workload="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.329 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" HandleID="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Workload="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019cda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d6cbc9658-ph7sh", "timestamp":"2024-12-13 13:08:09.299164902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.329 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.329 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.329 [INFO][4410] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4410] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.354 [INFO][4410] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.359 [INFO][4410] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.361 [INFO][4410] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.363 [INFO][4410] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.363 [INFO][4410] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.365 [INFO][4410] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.371 [INFO][4410] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.376 [INFO][4410] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.376 [INFO][4410] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" host="localhost" Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.376 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:08:09.407600 containerd[1444]: 2024-12-13 13:08:09.376 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" HandleID="k8s-pod-network.a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Workload="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.408169 containerd[1444]: 2024-12-13 13:08:09.379 [INFO][4324] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0", GenerateName:"calico-apiserver-7d6cbc9658-", Namespace:"calico-apiserver", SelfLink:"", UID:"2fee8b36-caea-489f-b412-0f8b4408366f", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6cbc9658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d6cbc9658-ph7sh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0127114f0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:08:09.408169 containerd[1444]: 2024-12-13 13:08:09.379 [INFO][4324] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.408169 containerd[1444]: 2024-12-13 13:08:09.379 [INFO][4324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0127114f0a3 ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.408169 containerd[1444]: 2024-12-13 13:08:09.389 [INFO][4324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.408169 containerd[1444]: 2024-12-13 13:08:09.390 [INFO][4324] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0", GenerateName:"calico-apiserver-7d6cbc9658-", Namespace:"calico-apiserver", SelfLink:"", UID:"2fee8b36-caea-489f-b412-0f8b4408366f", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6cbc9658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb", Pod:"calico-apiserver-7d6cbc9658-ph7sh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0127114f0a3", MAC:"b6:fe:9e:9c:05:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:08:09.408169 containerd[1444]: 2024-12-13 13:08:09.404 [INFO][4324] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ph7sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ph7sh-eth0" Dec 13 13:08:09.417072 systemd-networkd[1389]: cali1258d62ea9f: Link UP Dec 13 13:08:09.417328 systemd-networkd[1389]: cali1258d62ea9f: Gained carrier Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:08.838 [INFO][4374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:08.881 [INFO][4374] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--njbdr-eth0 coredns-76f75df574- kube-system 6531a85c-6dd0-4079-bc39-8116bc3f4b54 777 0 2024-12-13 13:07:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-njbdr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1258d62ea9f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:08.881 [INFO][4374] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" 
Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.308 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" HandleID="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Workload="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" HandleID="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Workload="localhost-k8s-coredns--76f75df574--njbdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-njbdr", "timestamp":"2024-12-13 13:08:09.308527226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.376 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.376 [INFO][4436] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.379 [INFO][4436] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.386 [INFO][4436] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.394 [INFO][4436] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.396 [INFO][4436] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.399 [INFO][4436] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.399 [INFO][4436] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.401 [INFO][4436] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.405 [INFO][4436] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.410 [INFO][4436] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.411 [INFO][4436] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" host="localhost" Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.411 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:08:09.435677 containerd[1444]: 2024-12-13 13:08:09.411 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" HandleID="k8s-pod-network.8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Workload="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.436334 containerd[1444]: 2024-12-13 13:08:09.415 [INFO][4374] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--njbdr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6531a85c-6dd0-4079-bc39-8116bc3f4b54", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-njbdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1258d62ea9f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:08:09.436334 containerd[1444]: 2024-12-13 13:08:09.415 [INFO][4374] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.436334 containerd[1444]: 2024-12-13 13:08:09.415 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1258d62ea9f ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" 
Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.436334 containerd[1444]: 2024-12-13 13:08:09.417 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.436334 containerd[1444]: 2024-12-13 13:08:09.418 [INFO][4374] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--njbdr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6531a85c-6dd0-4079-bc39-8116bc3f4b54", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb", Pod:"coredns-76f75df574-njbdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1258d62ea9f", MAC:"c6:9b:a0:97:55:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:08:09.436334 containerd[1444]: 2024-12-13 13:08:09.431 [INFO][4374] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb" Namespace="kube-system" Pod="coredns-76f75df574-njbdr" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--njbdr-eth0" Dec 13 13:08:09.436334 containerd[1444]: time="2024-12-13T13:08:09.435081474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:08:09.436334 containerd[1444]: time="2024-12-13T13:08:09.435902915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:08:09.436334 containerd[1444]: time="2024-12-13T13:08:09.435935115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:09.436334 containerd[1444]: time="2024-12-13T13:08:09.436043875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:09.463872 systemd-networkd[1389]: calice785f8c15a: Link UP Dec 13 13:08:09.464751 systemd-networkd[1389]: calice785f8c15a: Gained carrier Dec 13 13:08:09.469619 systemd[1]: Started cri-containerd-a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb.scope - libcontainer container a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb. Dec 13 13:08:09.477800 containerd[1444]: time="2024-12-13T13:08:09.477613131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:08:09.477800 containerd[1444]: time="2024-12-13T13:08:09.477682251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:08:09.478173 containerd[1444]: time="2024-12-13T13:08:09.477694171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:09.478233 containerd[1444]: time="2024-12-13T13:08:09.478155371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:08.700 [INFO][4286] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:08.863 [INFO][4286] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0 calico-kube-controllers-6895d58756- calico-system afc0c628-56bd-4014-86d9-0b030f93cf65 783 0 2024-12-13 13:07:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6895d58756 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6895d58756-pfggb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice785f8c15a [] []}} ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:08.865 [INFO][4286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.303 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" HandleID="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Workload="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" HandleID="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Workload="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e0140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6895d58756-pfggb", "timestamp":"2024-12-13 13:08:09.303347984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.411 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.411 [INFO][4417] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.414 [INFO][4417] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.419 [INFO][4417] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.430 [INFO][4417] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.435 [INFO][4417] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.438 [INFO][4417] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.438 [INFO][4417] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.442 [INFO][4417] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83 Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.448 [INFO][4417] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.454 [INFO][4417] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.454 [INFO][4417] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" host="localhost" Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.454 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:08:09.486041 containerd[1444]: 2024-12-13 13:08:09.454 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" HandleID="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Workload="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:09.486572 containerd[1444]: 2024-12-13 13:08:09.459 [INFO][4286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0", GenerateName:"calico-kube-controllers-6895d58756-", Namespace:"calico-system", SelfLink:"", UID:"afc0c628-56bd-4014-86d9-0b030f93cf65", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6895d58756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6895d58756-pfggb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice785f8c15a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.486572 containerd[1444]: 2024-12-13 13:08:09.459 [INFO][4286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:09.486572 containerd[1444]: 2024-12-13 13:08:09.459 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice785f8c15a ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:09.486572 containerd[1444]: 2024-12-13 13:08:09.464 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:09.486572 containerd[1444]: 2024-12-13 13:08:09.467 [INFO][4286] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0", GenerateName:"calico-kube-controllers-6895d58756-", Namespace:"calico-system", SelfLink:"", UID:"afc0c628-56bd-4014-86d9-0b030f93cf65", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6895d58756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83", Pod:"calico-kube-controllers-6895d58756-pfggb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice785f8c15a", MAC:"b6:f2:04:49:95:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.486572 containerd[1444]: 2024-12-13 13:08:09.476 [INFO][4286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Namespace="calico-system" Pod="calico-kube-controllers-6895d58756-pfggb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:09.490467 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:09.500148 systemd[1]: Started cri-containerd-8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb.scope - libcontainer container 8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb.
Dec 13 13:08:09.513573 systemd-networkd[1389]: cali2a3a635b5b0: Link UP
Dec 13 13:08:09.514238 systemd-networkd[1389]: cali2a3a635b5b0: Gained carrier
Dec 13 13:08:09.521152 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:09.532201 containerd[1444]: time="2024-12-13T13:08:09.532058992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ph7sh,Uid:2fee8b36-caea-489f-b412-0f8b4408366f,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb\""
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:08.822 [INFO][4345] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:08.868 [INFO][4345] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0 calico-apiserver-7d6cbc9658- calico-apiserver b381e5c1-7896-4e9c-934b-2f01903d7a34 785 0 2024-12-13 13:07:55 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6cbc9658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d6cbc9658-ggvqp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2a3a635b5b0 [] []}} ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:08.868 [INFO][4345] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.300 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" HandleID="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Workload="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" HandleID="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Workload="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ea170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d6cbc9658-ggvqp", "timestamp":"2024-12-13 13:08:09.300610703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.455 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.455 [INFO][4428] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.460 [INFO][4428] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.471 [INFO][4428] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.478 [INFO][4428] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.480 [INFO][4428] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.487 [INFO][4428] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.487 [INFO][4428] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.490 [INFO][4428] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.495 [INFO][4428] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.503 [INFO][4428] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.503 [INFO][4428] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" host="localhost"
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.503 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:08:09.535500 containerd[1444]: 2024-12-13 13:08:09.503 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" HandleID="k8s-pod-network.67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Workload="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.536018 containerd[1444]: 2024-12-13 13:08:09.509 [INFO][4345] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0", GenerateName:"calico-apiserver-7d6cbc9658-", Namespace:"calico-apiserver", SelfLink:"", UID:"b381e5c1-7896-4e9c-934b-2f01903d7a34", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6cbc9658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d6cbc9658-ggvqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a3a635b5b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.536018 containerd[1444]: 2024-12-13 13:08:09.511 [INFO][4345] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.536018 containerd[1444]: 2024-12-13 13:08:09.511 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a3a635b5b0 ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.536018 containerd[1444]: 2024-12-13 13:08:09.515 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.536018 containerd[1444]: 2024-12-13 13:08:09.517 [INFO][4345] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0", GenerateName:"calico-apiserver-7d6cbc9658-", Namespace:"calico-apiserver", SelfLink:"", UID:"b381e5c1-7896-4e9c-934b-2f01903d7a34", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6cbc9658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541", Pod:"calico-apiserver-7d6cbc9658-ggvqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a3a635b5b0", MAC:"12:fb:1a:2b:46:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.536018 containerd[1444]: 2024-12-13 13:08:09.531 [INFO][4345] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541" Namespace="calico-apiserver" Pod="calico-apiserver-7d6cbc9658-ggvqp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d6cbc9658--ggvqp-eth0"
Dec 13 13:08:09.536018 containerd[1444]: time="2024-12-13T13:08:09.535843353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 13:08:09.539452 containerd[1444]: time="2024-12-13T13:08:09.539172314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:08:09.543187 containerd[1444]: time="2024-12-13T13:08:09.542486796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:08:09.543187 containerd[1444]: time="2024-12-13T13:08:09.542517916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.543187 containerd[1444]: time="2024-12-13T13:08:09.542624156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.554305 containerd[1444]: time="2024-12-13T13:08:09.553973040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-njbdr,Uid:6531a85c-6dd0-4079-bc39-8116bc3f4b54,Namespace:kube-system,Attempt:4,} returns sandbox id \"8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb\""
Dec 13 13:08:09.555180 kubelet[2622]: E1213 13:08:09.555151 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:09.559769 containerd[1444]: time="2024-12-13T13:08:09.559297522Z" level=info msg="CreateContainer within sandbox \"8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:08:09.567649 systemd-networkd[1389]: cali77967e6425a: Link UP
Dec 13 13:08:09.568336 systemd-networkd[1389]: cali77967e6425a: Gained carrier
Dec 13 13:08:09.571089 systemd[1]: Started cri-containerd-bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83.scope - libcontainer container bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83.
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:08.823 [INFO][4361] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:08.863 [INFO][4361] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--kvwcl-eth0 coredns-76f75df574- kube-system 575f75ad-b249-4356-8ee6-1279602164ae 780 0 2024-12-13 13:07:49 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-kvwcl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77967e6425a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:08.868 [INFO][4361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.297 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" HandleID="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Workload="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.340 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" HandleID="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Workload="localhost-k8s-coredns--76f75df574--kvwcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-kvwcl", "timestamp":"2024-12-13 13:08:09.297292942 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.341 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.503 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.503 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.506 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.513 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.521 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.525 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.533 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.533 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.539 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.547 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.556 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.556 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" host="localhost"
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.556 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:08:09.584233 containerd[1444]: 2024-12-13 13:08:09.556 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" HandleID="k8s-pod-network.fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Workload="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.585879 containerd[1444]: 2024-12-13 13:08:09.559 [INFO][4361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kvwcl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"575f75ad-b249-4356-8ee6-1279602164ae", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-kvwcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77967e6425a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.585879 containerd[1444]: 2024-12-13 13:08:09.559 [INFO][4361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.585879 containerd[1444]: 2024-12-13 13:08:09.559 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77967e6425a ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.585879 containerd[1444]: 2024-12-13 13:08:09.568 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.585879 containerd[1444]: 2024-12-13 13:08:09.568 [INFO][4361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kvwcl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"575f75ad-b249-4356-8ee6-1279602164ae", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016", Pod:"coredns-76f75df574-kvwcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77967e6425a", MAC:"12:ca:51:0c:68:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.585879 containerd[1444]: 2024-12-13 13:08:09.579 [INFO][4361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016" Namespace="kube-system" Pod="coredns-76f75df574-kvwcl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kvwcl-eth0"
Dec 13 13:08:09.591558 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:09.601887 containerd[1444]: time="2024-12-13T13:08:09.598780937Z" level=info msg="CreateContainer within sandbox \"8fd21cf8cdfb8f0ef6d139a6327f096c0fa894f565d7f7912522be6079dce2cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"350b954311415783c9f7ebc71ef8c7945c38fa8dfd6ddd6b828f2f7187bcdbae\""
Dec 13 13:08:09.601887 containerd[1444]: time="2024-12-13T13:08:09.600673818Z" level=info msg="StartContainer for \"350b954311415783c9f7ebc71ef8c7945c38fa8dfd6ddd6b828f2f7187bcdbae\""
Dec 13 13:08:09.604202 systemd-networkd[1389]: cali2e56b2594e5: Link UP
Dec 13 13:08:09.604433 systemd-networkd[1389]: cali2e56b2594e5: Gained carrier
Dec 13 13:08:09.608968 containerd[1444]: time="2024-12-13T13:08:09.605789460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:08:09.608968 containerd[1444]: time="2024-12-13T13:08:09.605970980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:08:09.608968 containerd[1444]: time="2024-12-13T13:08:09.606020420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.608968 containerd[1444]: time="2024-12-13T13:08:09.606153220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:08.718 [INFO][4298] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:08.863 [INFO][4298] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bvvfs-eth0 csi-node-driver- calico-system 0cbbdf0f-40e1-46d6-a471-bc442a66a580 610 0 2024-12-13 13:07:55 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bvvfs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e56b2594e5 [] []}} ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:08.864 [INFO][4298] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.296 [INFO][4416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" HandleID="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Workload="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.343 [INFO][4416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" HandleID="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Workload="localhost-k8s-csi--node--driver--bvvfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400068ea70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bvvfs", "timestamp":"2024-12-13 13:08:09.296601421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.343 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.556 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.557 [INFO][4416] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.561 [INFO][4416] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.568 [INFO][4416] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.574 [INFO][4416] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.578 [INFO][4416] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.581 [INFO][4416] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.581 [INFO][4416] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.583 [INFO][4416] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.588 [INFO][4416] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.596 [INFO][4416] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.596 [INFO][4416] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" host="localhost"
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.596 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:08:09.618978 containerd[1444]: 2024-12-13 13:08:09.596 [INFO][4416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" HandleID="k8s-pod-network.686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Workload="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.619505 containerd[1444]: 2024-12-13 13:08:09.598 [INFO][4298] cni-plugin/k8s.go 386: Populated endpoint ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bvvfs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cbbdf0f-40e1-46d6-a471-bc442a66a580", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bvvfs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e56b2594e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.619505 containerd[1444]: 2024-12-13 13:08:09.599 [INFO][4298] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.619505 containerd[1444]: 2024-12-13 13:08:09.599 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e56b2594e5 ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.619505 containerd[1444]: 2024-12-13 13:08:09.604 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.619505 containerd[1444]: 2024-12-13 13:08:09.604 [INFO][4298] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bvvfs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cbbdf0f-40e1-46d6-a471-bc442a66a580", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537", Pod:"csi-node-driver-bvvfs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e56b2594e5", MAC:"1a:64:96:1c:ed:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:09.619505 containerd[1444]: 2024-12-13 13:08:09.615 [INFO][4298] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537" Namespace="calico-system" Pod="csi-node-driver-bvvfs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bvvfs-eth0"
Dec 13 13:08:09.623595 containerd[1444]: time="2024-12-13T13:08:09.623518867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:08:09.623782 containerd[1444]: time="2024-12-13T13:08:09.623584547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:08:09.623782 containerd[1444]: time="2024-12-13T13:08:09.623600547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.623782 containerd[1444]: time="2024-12-13T13:08:09.623678267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.633138 systemd[1]: Started cri-containerd-67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541.scope - libcontainer container 67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541.
Dec 13 13:08:09.637832 containerd[1444]: time="2024-12-13T13:08:09.637788512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6895d58756-pfggb,Uid:afc0c628-56bd-4014-86d9-0b030f93cf65,Namespace:calico-system,Attempt:4,} returns sandbox id \"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83\""
Dec 13 13:08:09.651091 systemd[1]: Started cri-containerd-fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016.scope - libcontainer container fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016.
Dec 13 13:08:09.658270 systemd[1]: Started cri-containerd-350b954311415783c9f7ebc71ef8c7945c38fa8dfd6ddd6b828f2f7187bcdbae.scope - libcontainer container 350b954311415783c9f7ebc71ef8c7945c38fa8dfd6ddd6b828f2f7187bcdbae.
Dec 13 13:08:09.663277 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:09.665027 kubelet[2622]: E1213 13:08:09.664904 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:09.667605 containerd[1444]: time="2024-12-13T13:08:09.667496683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:08:09.667690 containerd[1444]: time="2024-12-13T13:08:09.667611003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:08:09.667690 containerd[1444]: time="2024-12-13T13:08:09.667628203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.667747 containerd[1444]: time="2024-12-13T13:08:09.667721044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:09.674277 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:09.697326 containerd[1444]: time="2024-12-13T13:08:09.697151175Z" level=info msg="StartContainer for \"350b954311415783c9f7ebc71ef8c7945c38fa8dfd6ddd6b828f2f7187bcdbae\" returns successfully"
Dec 13 13:08:09.698152 systemd[1]: Started cri-containerd-686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537.scope - libcontainer container 686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537.
Dec 13 13:08:09.716033 containerd[1444]: time="2024-12-13T13:08:09.715715742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvwcl,Uid:575f75ad-b249-4356-8ee6-1279602164ae,Namespace:kube-system,Attempt:4,} returns sandbox id \"fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016\""
Dec 13 13:08:09.720106 kubelet[2622]: E1213 13:08:09.719767 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:09.720231 containerd[1444]: time="2024-12-13T13:08:09.719985664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6cbc9658-ggvqp,Uid:b381e5c1-7896-4e9c-934b-2f01903d7a34,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541\""
Dec 13 13:08:09.724740 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:09.725407 containerd[1444]: time="2024-12-13T13:08:09.725364946Z" level=info msg="CreateContainer within sandbox \"fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:08:09.747993 containerd[1444]: time="2024-12-13T13:08:09.747911914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bvvfs,Uid:0cbbdf0f-40e1-46d6-a471-bc442a66a580,Namespace:calico-system,Attempt:3,} returns sandbox id \"686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537\""
Dec 13 13:08:09.765688 containerd[1444]: time="2024-12-13T13:08:09.765635041Z" level=info msg="CreateContainer within sandbox \"fbad18bf2100c98126a7005822bb6829e61a73ef23a3d9f9480dfe56ea037016\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48836a7f9e3485f1de00585235d832f91b91638de181105381ca31c94aa6fb5e\""
Dec 13 13:08:09.766827 containerd[1444]: time="2024-12-13T13:08:09.766796841Z" level=info msg="StartContainer for \"48836a7f9e3485f1de00585235d832f91b91638de181105381ca31c94aa6fb5e\""
Dec 13 13:08:09.796103 systemd[1]: Started cri-containerd-48836a7f9e3485f1de00585235d832f91b91638de181105381ca31c94aa6fb5e.scope - libcontainer container 48836a7f9e3485f1de00585235d832f91b91638de181105381ca31c94aa6fb5e.
Dec 13 13:08:09.829890 containerd[1444]: time="2024-12-13T13:08:09.829804866Z" level=info msg="StartContainer for \"48836a7f9e3485f1de00585235d832f91b91638de181105381ca31c94aa6fb5e\" returns successfully"
Dec 13 13:08:10.055949 kernel: bpftool[5001]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 13:08:10.212470 systemd-networkd[1389]: vxlan.calico: Link UP
Dec 13 13:08:10.212479 systemd-networkd[1389]: vxlan.calico: Gained carrier
Dec 13 13:08:10.669874 kubelet[2622]: E1213 13:08:10.669711 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:10.675232 systemd-networkd[1389]: cali77967e6425a: Gained IPv6LL
Dec 13 13:08:10.682634 kubelet[2622]: I1213 13:08:10.682600 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-njbdr" podStartSLOduration=21.682562696 podStartE2EDuration="21.682562696s" podCreationTimestamp="2024-12-13 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:08:10.682086256 +0000 UTC m=+37.329975632" watchObservedRunningTime="2024-12-13 13:08:10.682562696 +0000 UTC m=+37.330451712"
Dec 13 13:08:10.682930 kubelet[2622]: E1213 13:08:10.682867 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:10.697812 kubelet[2622]: I1213 13:08:10.697772 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kvwcl" podStartSLOduration=21.697731141 podStartE2EDuration="21.697731141s" podCreationTimestamp="2024-12-13 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:08:10.697497981 +0000 UTC m=+37.345386997" watchObservedRunningTime="2024-12-13 13:08:10.697731141 +0000 UTC m=+37.345620157"
Dec 13 13:08:10.994121 systemd-networkd[1389]: cali2a3a635b5b0: Gained IPv6LL
Dec 13 13:08:10.994911 systemd-networkd[1389]: cali0127114f0a3: Gained IPv6LL
Dec 13 13:08:10.995087 systemd-networkd[1389]: calice785f8c15a: Gained IPv6LL
Dec 13 13:08:11.186279 systemd-networkd[1389]: cali1258d62ea9f: Gained IPv6LL
Dec 13 13:08:11.393153 containerd[1444]: time="2024-12-13T13:08:11.393010822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:08:11.394295 containerd[1444]: time="2024-12-13T13:08:11.393844302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409"
Dec 13 13:08:11.395149 containerd[1444]: time="2024-12-13T13:08:11.395109423Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:08:11.397450 containerd[1444]: time="2024-12-13T13:08:11.397408384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:08:11.398531 containerd[1444]: time="2024-12-13T13:08:11.398500024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.862622831s"
Dec 13 13:08:11.398714 containerd[1444]: time="2024-12-13T13:08:11.398605664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Dec 13 13:08:11.399222 containerd[1444]: time="2024-12-13T13:08:11.399200184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 13:08:11.400582 containerd[1444]: time="2024-12-13T13:08:11.400485865Z" level=info msg="CreateContainer within sandbox \"a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 13:08:11.409689 containerd[1444]: time="2024-12-13T13:08:11.409639828Z" level=info msg="CreateContainer within sandbox \"a1bc02357e25bc7c24818fa290be070ed2ffeeab753d8bb5175cd6626a9182bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73637dfb96fc4c241bb09b257e5077b1513bec4c8d773958406d7893bdf39cf8\""
Dec 13 13:08:11.410616 containerd[1444]: time="2024-12-13T13:08:11.410386988Z" level=info msg="StartContainer for \"73637dfb96fc4c241bb09b257e5077b1513bec4c8d773958406d7893bdf39cf8\""
Dec 13 13:08:11.443077 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL
Dec 13 13:08:11.465096 systemd[1]: Started cri-containerd-73637dfb96fc4c241bb09b257e5077b1513bec4c8d773958406d7893bdf39cf8.scope - libcontainer container 73637dfb96fc4c241bb09b257e5077b1513bec4c8d773958406d7893bdf39cf8.
Dec 13 13:08:11.496258 containerd[1444]: time="2024-12-13T13:08:11.496143937Z" level=info msg="StartContainer for \"73637dfb96fc4c241bb09b257e5077b1513bec4c8d773958406d7893bdf39cf8\" returns successfully"
Dec 13 13:08:11.570199 systemd-networkd[1389]: cali2e56b2594e5: Gained IPv6LL
Dec 13 13:08:11.686843 kubelet[2622]: E1213 13:08:11.686796 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:11.687178 kubelet[2622]: E1213 13:08:11.686802 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:12.151567 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:55362.service - OpenSSH per-connection server daemon (10.0.0.1:55362).
Dec 13 13:08:12.240668 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 55362 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:12.242543 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:12.247579 systemd-logind[1431]: New session 10 of user core.
Dec 13 13:08:12.253092 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:08:12.443220 sshd[5140]: Connection closed by 10.0.0.1 port 55362
Dec 13 13:08:12.444605 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:12.456426 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:55362.service: Deactivated successfully.
Dec 13 13:08:12.458673 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:08:12.460423 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:08:12.468229 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:55378.service - OpenSSH per-connection server daemon (10.0.0.1:55378).
Dec 13 13:08:12.470368 systemd-logind[1431]: Removed session 10.
Dec 13 13:08:12.526817 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 55378 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:12.529302 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:12.536987 systemd-logind[1431]: New session 11 of user core.
Dec 13 13:08:12.541059 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:08:12.690152 kubelet[2622]: I1213 13:08:12.690041 2622 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 13:08:12.691206 kubelet[2622]: E1213 13:08:12.690943 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:12.691673 kubelet[2622]: E1213 13:08:12.691654 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:08:12.781472 sshd[5157]: Connection closed by 10.0.0.1 port 55378
Dec 13 13:08:12.783941 sshd-session[5155]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:12.791638 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:55378.service: Deactivated successfully.
Dec 13 13:08:12.793241 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:08:12.794566 systemd-logind[1431]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:08:12.800168 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:39080.service - OpenSSH per-connection server daemon (10.0.0.1:39080).
Dec 13 13:08:12.801262 systemd-logind[1431]: Removed session 11.
Dec 13 13:08:12.876988 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 39080 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:12.879805 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:12.889774 systemd-logind[1431]: New session 12 of user core.
Dec 13 13:08:12.898099 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:08:13.106042 sshd[5179]: Connection closed by 10.0.0.1 port 39080
Dec 13 13:08:13.105505 sshd-session[5172]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:13.109579 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:39080.service: Deactivated successfully.
Dec 13 13:08:13.112481 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:08:13.113701 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:08:13.116784 systemd-logind[1431]: Removed session 12.
Dec 13 13:08:13.456049 containerd[1444]: time="2024-12-13T13:08:13.455996037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:08:13.457727 containerd[1444]: time="2024-12-13T13:08:13.457224437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Dec 13 13:08:13.458201 containerd[1444]: time="2024-12-13T13:08:13.458166758Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:08:13.460893 containerd[1444]: time="2024-12-13T13:08:13.460224318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:08:13.461781 containerd[1444]: time="2024-12-13T13:08:13.461750239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.062515015s"
Dec 13 13:08:13.461899 containerd[1444]: time="2024-12-13T13:08:13.461880039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Dec 13 13:08:13.462386 containerd[1444]: time="2024-12-13T13:08:13.462364159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 13:08:13.471652 containerd[1444]: time="2024-12-13T13:08:13.471520481Z" level=info msg="CreateContainer within sandbox \"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 13:08:13.481218 containerd[1444]: time="2024-12-13T13:08:13.481158844Z" level=info msg="CreateContainer within sandbox \"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\""
Dec 13 13:08:13.481895 containerd[1444]: time="2024-12-13T13:08:13.481865325Z" level=info msg="StartContainer for \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\""
Dec 13 13:08:13.513134 systemd[1]: Started cri-containerd-1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2.scope - libcontainer container 1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2.
Dec 13 13:08:13.544366 containerd[1444]: time="2024-12-13T13:08:13.543554103Z" level=info msg="StartContainer for \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\" returns successfully" Dec 13 13:08:13.709892 kubelet[2622]: I1213 13:08:13.709652 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ph7sh" podStartSLOduration=16.846200641 podStartE2EDuration="18.709603672s" podCreationTimestamp="2024-12-13 13:07:55 +0000 UTC" firstStartedPulling="2024-12-13 13:08:09.535487833 +0000 UTC m=+36.183376809" lastFinishedPulling="2024-12-13 13:08:11.398890824 +0000 UTC m=+38.046779840" observedRunningTime="2024-12-13 13:08:11.715769851 +0000 UTC m=+38.363658867" watchObservedRunningTime="2024-12-13 13:08:13.709603672 +0000 UTC m=+40.357492688" Dec 13 13:08:13.710321 kubelet[2622]: I1213 13:08:13.709930 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6895d58756-pfggb" podStartSLOduration=14.887606506000001 podStartE2EDuration="18.709897992s" podCreationTimestamp="2024-12-13 13:07:55 +0000 UTC" firstStartedPulling="2024-12-13 13:08:09.639896273 +0000 UTC m=+36.287785289" lastFinishedPulling="2024-12-13 13:08:13.462187759 +0000 UTC m=+40.110076775" observedRunningTime="2024-12-13 13:08:13.707378991 +0000 UTC m=+40.355268007" watchObservedRunningTime="2024-12-13 13:08:13.709897992 +0000 UTC m=+40.357787008" Dec 13 13:08:13.716413 containerd[1444]: time="2024-12-13T13:08:13.716365874Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:13.717091 containerd[1444]: time="2024-12-13T13:08:13.716714634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 13:08:13.723966 containerd[1444]: time="2024-12-13T13:08:13.723872956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 261.474677ms" Dec 13 13:08:13.723966 containerd[1444]: time="2024-12-13T13:08:13.723913556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 13:08:13.725488 containerd[1444]: time="2024-12-13T13:08:13.725153197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:08:13.726261 containerd[1444]: time="2024-12-13T13:08:13.726219677Z" level=info msg="CreateContainer within sandbox \"67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:08:13.745685 containerd[1444]: time="2024-12-13T13:08:13.745639923Z" level=info msg="CreateContainer within sandbox \"67546c765ddc1f3becedc4b1e55b50206dce228b54625c9fcdb2170a55ceb541\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b8de662e6a1c243c1563629f3732d919edbb3e70e9d1d4db231c2aa471ad86c\"" Dec 13 13:08:13.746998 containerd[1444]: time="2024-12-13T13:08:13.746967083Z" level=info msg="StartContainer for \"7b8de662e6a1c243c1563629f3732d919edbb3e70e9d1d4db231c2aa471ad86c\"" Dec 13 13:08:13.786114 
systemd[1]: Started cri-containerd-7b8de662e6a1c243c1563629f3732d919edbb3e70e9d1d4db231c2aa471ad86c.scope - libcontainer container 7b8de662e6a1c243c1563629f3732d919edbb3e70e9d1d4db231c2aa471ad86c. Dec 13 13:08:13.821963 containerd[1444]: time="2024-12-13T13:08:13.821884465Z" level=info msg="StartContainer for \"7b8de662e6a1c243c1563629f3732d919edbb3e70e9d1d4db231c2aa471ad86c\" returns successfully" Dec 13 13:08:14.708984 containerd[1444]: time="2024-12-13T13:08:14.708915354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:14.713398 containerd[1444]: time="2024-12-13T13:08:14.710103235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 13:08:14.713398 containerd[1444]: time="2024-12-13T13:08:14.713157756Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:14.716248 containerd[1444]: time="2024-12-13T13:08:14.716198476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:14.717380 containerd[1444]: time="2024-12-13T13:08:14.716996837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 991.80492ms" Dec 13 13:08:14.717380 containerd[1444]: time="2024-12-13T13:08:14.717267237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 13:08:14.721429 containerd[1444]: time="2024-12-13T13:08:14.721370558Z" level=info msg="CreateContainer within sandbox \"686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:08:14.734117 containerd[1444]: time="2024-12-13T13:08:14.734072761Z" level=info msg="CreateContainer within sandbox \"686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"034149414c785f3bba8d631f037d87e348cc709c81f6f5921ed297d1a721b7d4\"" Dec 13 13:08:14.735075 containerd[1444]: time="2024-12-13T13:08:14.734762442Z" level=info msg="StartContainer for \"034149414c785f3bba8d631f037d87e348cc709c81f6f5921ed297d1a721b7d4\"" Dec 13 13:08:14.770126 systemd[1]: Started cri-containerd-034149414c785f3bba8d631f037d87e348cc709c81f6f5921ed297d1a721b7d4.scope - libcontainer container 034149414c785f3bba8d631f037d87e348cc709c81f6f5921ed297d1a721b7d4. 
Dec 13 13:08:14.820469 containerd[1444]: time="2024-12-13T13:08:14.820332185Z" level=info msg="StartContainer for \"034149414c785f3bba8d631f037d87e348cc709c81f6f5921ed297d1a721b7d4\" returns successfully" Dec 13 13:08:14.823016 containerd[1444]: time="2024-12-13T13:08:14.822973466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:08:15.705423 kubelet[2622]: I1213 13:08:15.705391 2622 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:08:15.798249 containerd[1444]: time="2024-12-13T13:08:15.798197643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:15.799164 containerd[1444]: time="2024-12-13T13:08:15.799112723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 13:08:15.800080 containerd[1444]: time="2024-12-13T13:08:15.800048443Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:15.803085 containerd[1444]: time="2024-12-13T13:08:15.803047324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:15.803604 containerd[1444]: time="2024-12-13T13:08:15.803562884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 980.435898ms" Dec 13 13:08:15.803648 containerd[1444]: time="2024-12-13T13:08:15.803599324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 13:08:15.805798 containerd[1444]: time="2024-12-13T13:08:15.805633205Z" level=info msg="CreateContainer within sandbox \"686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:08:15.818514 containerd[1444]: time="2024-12-13T13:08:15.818461928Z" level=info msg="CreateContainer within sandbox \"686fd640f2db28f02c216873df587a568c4d8b1ba3f64a79d6709bb5cef78537\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ff857ceb5a42bb8a7c3577ed0af02389e2fc28a01d30cc2a53b80c4b7fc0320b\"" Dec 13 13:08:15.824304 containerd[1444]: time="2024-12-13T13:08:15.824260169Z" level=info msg="StartContainer for \"ff857ceb5a42bb8a7c3577ed0af02389e2fc28a01d30cc2a53b80c4b7fc0320b\"" Dec 13 13:08:15.851100 systemd[1]: Started cri-containerd-ff857ceb5a42bb8a7c3577ed0af02389e2fc28a01d30cc2a53b80c4b7fc0320b.scope - libcontainer container ff857ceb5a42bb8a7c3577ed0af02389e2fc28a01d30cc2a53b80c4b7fc0320b. 
Dec 13 13:08:15.876313 containerd[1444]: time="2024-12-13T13:08:15.876238863Z" level=info msg="StartContainer for \"ff857ceb5a42bb8a7c3577ed0af02389e2fc28a01d30cc2a53b80c4b7fc0320b\" returns successfully" Dec 13 13:08:16.521213 kubelet[2622]: I1213 13:08:16.521168 2622 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:08:16.521213 kubelet[2622]: I1213 13:08:16.521214 2622 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:08:16.723735 kubelet[2622]: I1213 13:08:16.723685 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6cbc9658-ggvqp" podStartSLOduration=17.723624581 podStartE2EDuration="21.723642592s" podCreationTimestamp="2024-12-13 13:07:55 +0000 UTC" firstStartedPulling="2024-12-13 13:08:09.724213425 +0000 UTC m=+36.372102441" lastFinishedPulling="2024-12-13 13:08:13.724231436 +0000 UTC m=+40.372120452" observedRunningTime="2024-12-13 13:08:14.711747635 +0000 UTC m=+41.359636691" watchObservedRunningTime="2024-12-13 13:08:16.723642592 +0000 UTC m=+43.371531608" Dec 13 13:08:16.724117 kubelet[2622]: I1213 13:08:16.723976 2622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-bvvfs" podStartSLOduration=15.669858903 podStartE2EDuration="21.723956952s" podCreationTimestamp="2024-12-13 13:07:55 +0000 UTC" firstStartedPulling="2024-12-13 13:08:09.749785715 +0000 UTC m=+36.397674771" lastFinishedPulling="2024-12-13 13:08:15.803883804 +0000 UTC m=+42.451772820" observedRunningTime="2024-12-13 13:08:16.723073231 +0000 UTC m=+43.370962207" watchObservedRunningTime="2024-12-13 13:08:16.723956952 +0000 UTC m=+43.371845968" Dec 13 13:08:18.114856 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:39082.service - OpenSSH per-connection server daemon (10.0.0.1:39082). Dec 13 13:08:18.171783 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 39082 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:18.173200 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:18.177420 systemd-logind[1431]: New session 13 of user core. Dec 13 13:08:18.187132 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:08:18.373941 sshd[5384]: Connection closed by 10.0.0.1 port 39082 Dec 13 13:08:18.374267 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:18.385689 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:39082.service: Deactivated successfully. Dec 13 13:08:18.388437 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:08:18.390065 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:08:18.395194 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:39098.service - OpenSSH per-connection server daemon (10.0.0.1:39098). Dec 13 13:08:18.397103 systemd-logind[1431]: Removed session 13. Dec 13 13:08:18.437050 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 39098 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:18.438212 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:18.442817 systemd-logind[1431]: New session 14 of user core. 
Dec 13 13:08:18.449098 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:08:18.566909 kubelet[2622]: I1213 13:08:18.566789 2622 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:08:18.769427 sshd[5399]: Connection closed by 10.0.0.1 port 39098 Dec 13 13:08:18.769735 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:18.782512 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:39098.service: Deactivated successfully. Dec 13 13:08:18.784177 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:08:18.785319 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:08:18.787783 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:39104.service - OpenSSH per-connection server daemon (10.0.0.1:39104). Dec 13 13:08:18.788540 systemd-logind[1431]: Removed session 14. Dec 13 13:08:18.849779 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 39104 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:18.851024 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:18.854948 systemd-logind[1431]: New session 15 of user core. Dec 13 13:08:18.862633 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:08:20.460134 sshd[5413]: Connection closed by 10.0.0.1 port 39104 Dec 13 13:08:20.460779 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:20.470095 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:39104.service: Deactivated successfully. Dec 13 13:08:20.475868 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:08:20.478587 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:08:20.489269 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:39106.service - OpenSSH per-connection server daemon (10.0.0.1:39106). Dec 13 13:08:20.490825 systemd-logind[1431]: Removed session 15. Dec 13 13:08:20.527847 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 39106 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:20.529308 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:20.533171 systemd-logind[1431]: New session 16 of user core. Dec 13 13:08:20.550102 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:08:20.854377 sshd[5442]: Connection closed by 10.0.0.1 port 39106 Dec 13 13:08:20.853745 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:20.863912 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:39106.service: Deactivated successfully. Dec 13 13:08:20.865617 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:08:20.867808 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:08:20.869113 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:39120.service - OpenSSH per-connection server daemon (10.0.0.1:39120). Dec 13 13:08:20.869889 systemd-logind[1431]: Removed session 16. Dec 13 13:08:20.911250 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 39120 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:20.912560 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:20.916917 systemd-logind[1431]: New session 17 of user core. Dec 13 13:08:20.928109 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 13:08:21.092630 sshd[5455]: Connection closed by 10.0.0.1 port 39120 Dec 13 13:08:21.093156 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:21.096522 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:39120.service: Deactivated successfully. Dec 13 13:08:21.099605 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:08:21.100196 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:08:21.100976 systemd-logind[1431]: Removed session 17. Dec 13 13:08:26.103633 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:42240.service - OpenSSH per-connection server daemon (10.0.0.1:42240). Dec 13 13:08:26.144166 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 42240 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:26.145250 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:26.148896 systemd-logind[1431]: New session 18 of user core. Dec 13 13:08:26.158123 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:08:26.323635 sshd[5476]: Connection closed by 10.0.0.1 port 42240 Dec 13 13:08:26.323996 sshd-session[5474]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:26.326426 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:42240.service: Deactivated successfully. Dec 13 13:08:26.328178 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:08:26.331440 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:08:26.332488 systemd-logind[1431]: Removed session 18. Dec 13 13:08:31.150707 kubelet[2622]: E1213 13:08:31.150609 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:31.335507 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:42250.service - OpenSSH per-connection server daemon (10.0.0.1:42250). Dec 13 13:08:31.379956 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 42250 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:31.381341 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:31.384986 systemd-logind[1431]: New session 19 of user core. Dec 13 13:08:31.395104 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:08:31.536402 sshd[5519]: Connection closed by 10.0.0.1 port 42250 Dec 13 13:08:31.536736 sshd-session[5517]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:31.540091 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:42250.service: Deactivated successfully. Dec 13 13:08:31.541703 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:08:31.543188 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:08:31.544073 systemd-logind[1431]: Removed session 19. 
Dec 13 13:08:33.440323 containerd[1444]: time="2024-12-13T13:08:33.440267609Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:33.440864 containerd[1444]: time="2024-12-13T13:08:33.440386449Z" level=info msg="TearDown network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" successfully" Dec 13 13:08:33.440864 containerd[1444]: time="2024-12-13T13:08:33.440397769Z" level=info msg="StopPodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" returns successfully" Dec 13 13:08:33.447167 containerd[1444]: time="2024-12-13T13:08:33.447117450Z" level=info msg="RemovePodSandbox for \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:33.447167 containerd[1444]: time="2024-12-13T13:08:33.447161490Z" level=info msg="Forcibly stopping sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\"" Dec 13 13:08:33.447265 containerd[1444]: time="2024-12-13T13:08:33.447236490Z" level=info msg="TearDown network for sandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" successfully" Dec 13 13:08:33.455764 containerd[1444]: time="2024-12-13T13:08:33.455722211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.455840 containerd[1444]: time="2024-12-13T13:08:33.455795571Z" level=info msg="RemovePodSandbox \"7d3ed20e49897916e8d2add9f19ca57e443caa327bb88ce2f1a943a02055e707\" returns successfully" Dec 13 13:08:33.456348 containerd[1444]: time="2024-12-13T13:08:33.456319851Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" Dec 13 13:08:33.456704 containerd[1444]: time="2024-12-13T13:08:33.456408571Z" level=info msg="TearDown network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" successfully" Dec 13 13:08:33.456704 containerd[1444]: time="2024-12-13T13:08:33.456418051Z" level=info msg="StopPodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" returns successfully" Dec 13 13:08:33.456704 containerd[1444]: time="2024-12-13T13:08:33.456651411Z" level=info msg="RemovePodSandbox for \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" Dec 13 13:08:33.456704 containerd[1444]: time="2024-12-13T13:08:33.456669691Z" level=info msg="Forcibly stopping sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\"" Dec 13 13:08:33.456804 containerd[1444]: time="2024-12-13T13:08:33.456726331Z" level=info msg="TearDown network for sandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" successfully" Dec 13 13:08:33.459748 containerd[1444]: time="2024-12-13T13:08:33.459578131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.459748 containerd[1444]: time="2024-12-13T13:08:33.459630731Z" level=info msg="RemovePodSandbox \"ccac7d173d88a47737a4249b48065a67d7021746be32a7ede2429cc7f6245bca\" returns successfully" Dec 13 13:08:33.460091 containerd[1444]: time="2024-12-13T13:08:33.459940291Z" level=info msg="StopPodSandbox for \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\"" Dec 13 13:08:33.460091 containerd[1444]: time="2024-12-13T13:08:33.460022331Z" level=info msg="TearDown network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" successfully" Dec 13 13:08:33.460091 containerd[1444]: time="2024-12-13T13:08:33.460031651Z" level=info msg="StopPodSandbox for \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" returns successfully" Dec 13 13:08:33.461463 containerd[1444]: time="2024-12-13T13:08:33.460358891Z" level=info msg="RemovePodSandbox for \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\"" Dec 13 13:08:33.461463 containerd[1444]: time="2024-12-13T13:08:33.460380411Z" level=info msg="Forcibly stopping sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\"" Dec 13 13:08:33.461463 containerd[1444]: time="2024-12-13T13:08:33.460437971Z" level=info msg="TearDown network for sandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" successfully" Dec 13 13:08:33.463219 containerd[1444]: time="2024-12-13T13:08:33.463159331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.463350 containerd[1444]: time="2024-12-13T13:08:33.463332651Z" level=info msg="RemovePodSandbox \"68d12b2ab0b873b30886d308ea3de85af76db5272af295b1efc8675fbaeb9a81\" returns successfully" Dec 13 13:08:33.463736 containerd[1444]: time="2024-12-13T13:08:33.463710451Z" level=info msg="StopPodSandbox for \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\"" Dec 13 13:08:33.469174 containerd[1444]: time="2024-12-13T13:08:33.469136492Z" level=info msg="TearDown network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\" successfully" Dec 13 13:08:33.469174 containerd[1444]: time="2024-12-13T13:08:33.469166732Z" level=info msg="StopPodSandbox for \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\" returns successfully" Dec 13 13:08:33.469942 containerd[1444]: time="2024-12-13T13:08:33.469509772Z" level=info msg="RemovePodSandbox for \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\"" Dec 13 13:08:33.469942 containerd[1444]: time="2024-12-13T13:08:33.469537972Z" level=info msg="Forcibly stopping sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\"" Dec 13 13:08:33.469942 containerd[1444]: time="2024-12-13T13:08:33.469605692Z" level=info msg="TearDown network for sandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\" successfully" Dec 13 13:08:33.472473 containerd[1444]: time="2024-12-13T13:08:33.472321452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.472473 containerd[1444]: time="2024-12-13T13:08:33.472378092Z" level=info msg="RemovePodSandbox \"fedc453b3e594ca2a682fe9c66d5bc1bea01c09f6e7f0334a00e274b0d5e7e0b\" returns successfully" Dec 13 13:08:33.472720 containerd[1444]: time="2024-12-13T13:08:33.472698732Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:33.472801 containerd[1444]: time="2024-12-13T13:08:33.472785772Z" level=info msg="TearDown network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" successfully" Dec 13 13:08:33.472825 containerd[1444]: time="2024-12-13T13:08:33.472799732Z" level=info msg="StopPodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" returns successfully" Dec 13 13:08:33.474168 containerd[1444]: time="2024-12-13T13:08:33.473060532Z" level=info msg="RemovePodSandbox for \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:33.474168 containerd[1444]: time="2024-12-13T13:08:33.473088372Z" level=info msg="Forcibly stopping sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\"" Dec 13 13:08:33.474168 containerd[1444]: time="2024-12-13T13:08:33.473144772Z" level=info msg="TearDown network for sandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" successfully" Dec 13 13:08:33.475662 containerd[1444]: time="2024-12-13T13:08:33.475578972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.475662 containerd[1444]: time="2024-12-13T13:08:33.475627892Z" level=info msg="RemovePodSandbox \"ae479ae6a6663462293a9efbcb374b740041a02760ed8739eaf35a922675a17d\" returns successfully" Dec 13 13:08:33.476328 containerd[1444]: time="2024-12-13T13:08:33.476109652Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" Dec 13 13:08:33.476328 containerd[1444]: time="2024-12-13T13:08:33.476207652Z" level=info msg="TearDown network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" successfully" Dec 13 13:08:33.476328 containerd[1444]: time="2024-12-13T13:08:33.476222092Z" level=info msg="StopPodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" returns successfully" Dec 13 13:08:33.476712 containerd[1444]: time="2024-12-13T13:08:33.476644412Z" level=info msg="RemovePodSandbox for \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" Dec 13 13:08:33.476712 containerd[1444]: time="2024-12-13T13:08:33.476679612Z" level=info msg="Forcibly stopping sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\"" Dec 13 13:08:33.476810 containerd[1444]: time="2024-12-13T13:08:33.476742532Z" level=info msg="TearDown network for sandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" successfully" Dec 13 13:08:33.479339 containerd[1444]: time="2024-12-13T13:08:33.479305052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.479389 containerd[1444]: time="2024-12-13T13:08:33.479355812Z" level=info msg="RemovePodSandbox \"807e6b897d5c5a9455bb9eb345295c94f79378f38bc2debd28d654f1a5f4b816\" returns successfully" Dec 13 13:08:33.479736 containerd[1444]: time="2024-12-13T13:08:33.479657772Z" level=info msg="StopPodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\"" Dec 13 13:08:33.479979 containerd[1444]: time="2024-12-13T13:08:33.479878372Z" level=info msg="TearDown network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" successfully" Dec 13 13:08:33.479979 containerd[1444]: time="2024-12-13T13:08:33.479897132Z" level=info msg="StopPodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" returns successfully" Dec 13 13:08:33.480161 containerd[1444]: time="2024-12-13T13:08:33.480145293Z" level=info msg="RemovePodSandbox for \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\"" Dec 13 13:08:33.480193 containerd[1444]: time="2024-12-13T13:08:33.480168773Z" level=info msg="Forcibly stopping sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\"" Dec 13 13:08:33.480309 containerd[1444]: time="2024-12-13T13:08:33.480229093Z" level=info msg="TearDown network for sandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" successfully" Dec 13 13:08:33.482557 containerd[1444]: time="2024-12-13T13:08:33.482525573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.482623 containerd[1444]: time="2024-12-13T13:08:33.482580813Z" level=info msg="RemovePodSandbox \"485087125903c712999ca3ac6293c276d8ed0f0a29e6cdb8396ba0834e10212f\" returns successfully" Dec 13 13:08:33.482930 containerd[1444]: time="2024-12-13T13:08:33.482894813Z" level=info msg="StopPodSandbox for \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\"" Dec 13 13:08:33.483006 containerd[1444]: time="2024-12-13T13:08:33.482991413Z" level=info msg="TearDown network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\" successfully" Dec 13 13:08:33.483034 containerd[1444]: time="2024-12-13T13:08:33.483006413Z" level=info msg="StopPodSandbox for \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\" returns successfully" Dec 13 13:08:33.483332 containerd[1444]: time="2024-12-13T13:08:33.483306253Z" level=info msg="RemovePodSandbox for \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\"" Dec 13 13:08:33.483363 containerd[1444]: time="2024-12-13T13:08:33.483337693Z" level=info msg="Forcibly stopping sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\"" Dec 13 13:08:33.483416 containerd[1444]: time="2024-12-13T13:08:33.483401453Z" level=info msg="TearDown network for sandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\" successfully" Dec 13 13:08:33.486155 containerd[1444]: time="2024-12-13T13:08:33.486119493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.486228 containerd[1444]: time="2024-12-13T13:08:33.486177253Z" level=info msg="RemovePodSandbox \"35c18f060763675ee252c5d3d0b6434e0850da79a4ae2cb9d5e7c8a664090b55\" returns successfully" Dec 13 13:08:33.486688 containerd[1444]: time="2024-12-13T13:08:33.486519893Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:33.486955 containerd[1444]: time="2024-12-13T13:08:33.486843333Z" level=info msg="TearDown network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" successfully" Dec 13 13:08:33.486955 containerd[1444]: time="2024-12-13T13:08:33.486863653Z" level=info msg="StopPodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" returns successfully" Dec 13 13:08:33.487186 containerd[1444]: time="2024-12-13T13:08:33.487162933Z" level=info msg="RemovePodSandbox for \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:33.488386 containerd[1444]: time="2024-12-13T13:08:33.487262893Z" level=info msg="Forcibly stopping sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\"" Dec 13 13:08:33.488386 containerd[1444]: time="2024-12-13T13:08:33.487339013Z" level=info msg="TearDown network for sandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" successfully" Dec 13 13:08:33.490000 containerd[1444]: time="2024-12-13T13:08:33.489937493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.490119 containerd[1444]: time="2024-12-13T13:08:33.490102453Z" level=info msg="RemovePodSandbox \"65d7ae9340947db8c318335db7a7645c1bba6c5a2804addb1135c38c35ec52e3\" returns successfully" Dec 13 13:08:33.490482 containerd[1444]: time="2024-12-13T13:08:33.490457293Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" Dec 13 13:08:33.490556 containerd[1444]: time="2024-12-13T13:08:33.490542133Z" level=info msg="TearDown network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" successfully" Dec 13 13:08:33.490602 containerd[1444]: time="2024-12-13T13:08:33.490555413Z" level=info msg="StopPodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" returns successfully" Dec 13 13:08:33.491068 containerd[1444]: time="2024-12-13T13:08:33.491047093Z" level=info msg="RemovePodSandbox for \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" Dec 13 13:08:33.491121 containerd[1444]: time="2024-12-13T13:08:33.491070013Z" level=info msg="Forcibly stopping sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\"" Dec 13 13:08:33.491121 containerd[1444]: time="2024-12-13T13:08:33.491125333Z" level=info msg="TearDown network for sandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" successfully" Dec 13 13:08:33.493979 containerd[1444]: time="2024-12-13T13:08:33.493943614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.494060 containerd[1444]: time="2024-12-13T13:08:33.494005094Z" level=info msg="RemovePodSandbox \"371ed6cf24b6c19135943a4eb8cf94941db4e398c37eb15a00aabd36cf86a461\" returns successfully" Dec 13 13:08:33.494633 containerd[1444]: time="2024-12-13T13:08:33.494336694Z" level=info msg="StopPodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\"" Dec 13 13:08:33.494633 containerd[1444]: time="2024-12-13T13:08:33.494419894Z" level=info msg="TearDown network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" successfully" Dec 13 13:08:33.494633 containerd[1444]: time="2024-12-13T13:08:33.494439294Z" level=info msg="StopPodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" returns successfully" Dec 13 13:08:33.495194 containerd[1444]: time="2024-12-13T13:08:33.494960254Z" level=info msg="RemovePodSandbox for \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\"" Dec 13 13:08:33.495194 containerd[1444]: time="2024-12-13T13:08:33.495064854Z" level=info msg="Forcibly stopping sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\"" Dec 13 13:08:33.495194 containerd[1444]: time="2024-12-13T13:08:33.495128054Z" level=info msg="TearDown network for sandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" successfully" Dec 13 13:08:33.497725 containerd[1444]: time="2024-12-13T13:08:33.497691214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.497915 containerd[1444]: time="2024-12-13T13:08:33.497832934Z" level=info msg="RemovePodSandbox \"d6b8d0158d3945c8194a81d24d71326d7a131773b0cf51accf264beb019547ca\" returns successfully" Dec 13 13:08:33.498201 containerd[1444]: time="2024-12-13T13:08:33.498175774Z" level=info msg="StopPodSandbox for \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\"" Dec 13 13:08:33.498284 containerd[1444]: time="2024-12-13T13:08:33.498270494Z" level=info msg="TearDown network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\" successfully" Dec 13 13:08:33.498318 containerd[1444]: time="2024-12-13T13:08:33.498285414Z" level=info msg="StopPodSandbox for \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\" returns successfully" Dec 13 13:08:33.498747 containerd[1444]: time="2024-12-13T13:08:33.498627134Z" level=info msg="RemovePodSandbox for \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\"" Dec 13 13:08:33.498747 containerd[1444]: time="2024-12-13T13:08:33.498658494Z" level=info msg="Forcibly stopping sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\"" Dec 13 13:08:33.498747 containerd[1444]: time="2024-12-13T13:08:33.498725854Z" level=info msg="TearDown network for sandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\" successfully" Dec 13 13:08:33.501072 containerd[1444]: time="2024-12-13T13:08:33.501038014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.501129 containerd[1444]: time="2024-12-13T13:08:33.501100094Z" level=info msg="RemovePodSandbox \"56077653ebacc3b1051e470842f46345115608852df7cac0e9454b903217d018\" returns successfully" Dec 13 13:08:33.501580 containerd[1444]: time="2024-12-13T13:08:33.501552574Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:33.501665 containerd[1444]: time="2024-12-13T13:08:33.501643134Z" level=info msg="TearDown network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" successfully" Dec 13 13:08:33.501665 containerd[1444]: time="2024-12-13T13:08:33.501654214Z" level=info msg="StopPodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" returns successfully" Dec 13 13:08:33.502106 containerd[1444]: time="2024-12-13T13:08:33.502080454Z" level=info msg="RemovePodSandbox for \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:33.502172 containerd[1444]: time="2024-12-13T13:08:33.502108734Z" level=info msg="Forcibly stopping sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\"" Dec 13 13:08:33.502199 containerd[1444]: time="2024-12-13T13:08:33.502176814Z" level=info msg="TearDown network for sandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" successfully" Dec 13 13:08:33.504585 containerd[1444]: time="2024-12-13T13:08:33.504555735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.504633 containerd[1444]: time="2024-12-13T13:08:33.504606575Z" level=info msg="RemovePodSandbox \"03044fd1b3482e0180c2772d126611fce77f77f2395fb599361aef45f96093af\" returns successfully" Dec 13 13:08:33.505115 containerd[1444]: time="2024-12-13T13:08:33.505005135Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" Dec 13 13:08:33.505268 containerd[1444]: time="2024-12-13T13:08:33.505099655Z" level=info msg="TearDown network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" successfully" Dec 13 13:08:33.505268 containerd[1444]: time="2024-12-13T13:08:33.505188775Z" level=info msg="StopPodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" returns successfully" Dec 13 13:08:33.505498 containerd[1444]: time="2024-12-13T13:08:33.505474655Z" level=info msg="RemovePodSandbox for \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" Dec 13 13:08:33.505540 containerd[1444]: time="2024-12-13T13:08:33.505511895Z" level=info msg="Forcibly stopping sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\"" Dec 13 13:08:33.505587 containerd[1444]: time="2024-12-13T13:08:33.505572935Z" level=info msg="TearDown network for sandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" successfully" Dec 13 13:08:33.507990 containerd[1444]: time="2024-12-13T13:08:33.507956855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.508037 containerd[1444]: time="2024-12-13T13:08:33.508020975Z" level=info msg="RemovePodSandbox \"6df4a84e0027bd9244a453db628e982a16ba2e4aa5d8112d6e577f178f2c240a\" returns successfully" Dec 13 13:08:33.508483 containerd[1444]: time="2024-12-13T13:08:33.508335095Z" level=info msg="StopPodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\"" Dec 13 13:08:33.508483 containerd[1444]: time="2024-12-13T13:08:33.508432215Z" level=info msg="TearDown network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" successfully" Dec 13 13:08:33.508664 containerd[1444]: time="2024-12-13T13:08:33.508441975Z" level=info msg="StopPodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" returns successfully" Dec 13 13:08:33.508992 containerd[1444]: time="2024-12-13T13:08:33.508964215Z" level=info msg="RemovePodSandbox for \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\"" Dec 13 13:08:33.509044 containerd[1444]: time="2024-12-13T13:08:33.508997495Z" level=info msg="Forcibly stopping sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\"" Dec 13 13:08:33.509085 containerd[1444]: time="2024-12-13T13:08:33.509069055Z" level=info msg="TearDown network for sandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" successfully" Dec 13 13:08:33.511636 containerd[1444]: time="2024-12-13T13:08:33.511587015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.511752 containerd[1444]: time="2024-12-13T13:08:33.511653095Z" level=info msg="RemovePodSandbox \"0ccc050dfd7491f2bd4ad4f0574d3de631147e1664ac73062ca2d68ac359b82b\" returns successfully" Dec 13 13:08:33.512109 containerd[1444]: time="2024-12-13T13:08:33.512081655Z" level=info msg="StopPodSandbox for \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\"" Dec 13 13:08:33.512209 containerd[1444]: time="2024-12-13T13:08:33.512190815Z" level=info msg="TearDown network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\" successfully" Dec 13 13:08:33.512249 containerd[1444]: time="2024-12-13T13:08:33.512207855Z" level=info msg="StopPodSandbox for \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\" returns successfully" Dec 13 13:08:33.512528 containerd[1444]: time="2024-12-13T13:08:33.512501695Z" level=info msg="RemovePodSandbox for \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\"" Dec 13 13:08:33.512556 containerd[1444]: time="2024-12-13T13:08:33.512535975Z" level=info msg="Forcibly stopping sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\"" Dec 13 13:08:33.512615 containerd[1444]: time="2024-12-13T13:08:33.512601175Z" level=info msg="TearDown network for sandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\" successfully" Dec 13 13:08:33.515085 containerd[1444]: time="2024-12-13T13:08:33.515051735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.515137 containerd[1444]: time="2024-12-13T13:08:33.515111495Z" level=info msg="RemovePodSandbox \"0c5da573ef43886eb00234ed50fff214076b52db4117c157c3df8ec2fcb80e66\" returns successfully" Dec 13 13:08:33.515487 containerd[1444]: time="2024-12-13T13:08:33.515464975Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:33.515569 containerd[1444]: time="2024-12-13T13:08:33.515554855Z" level=info msg="TearDown network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" successfully" Dec 13 13:08:33.515596 containerd[1444]: time="2024-12-13T13:08:33.515571215Z" level=info msg="StopPodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" returns successfully" Dec 13 13:08:33.515876 containerd[1444]: time="2024-12-13T13:08:33.515845335Z" level=info msg="RemovePodSandbox for \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:33.515909 containerd[1444]: time="2024-12-13T13:08:33.515883095Z" level=info msg="Forcibly stopping sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\"" Dec 13 13:08:33.515964 containerd[1444]: time="2024-12-13T13:08:33.515949695Z" level=info msg="TearDown network for sandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" successfully" Dec 13 13:08:33.518822 containerd[1444]: time="2024-12-13T13:08:33.518788976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.518968 containerd[1444]: time="2024-12-13T13:08:33.518837776Z" level=info msg="RemovePodSandbox \"88715665723cccb00aeae59cb9ef9aaf2503e60184edd6f11004d9d98a1bb7c2\" returns successfully" Dec 13 13:08:33.519223 containerd[1444]: time="2024-12-13T13:08:33.519199496Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" Dec 13 13:08:33.519316 containerd[1444]: time="2024-12-13T13:08:33.519300016Z" level=info msg="TearDown network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" successfully" Dec 13 13:08:33.519316 containerd[1444]: time="2024-12-13T13:08:33.519314016Z" level=info msg="StopPodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" returns successfully" Dec 13 13:08:33.519698 containerd[1444]: time="2024-12-13T13:08:33.519671496Z" level=info msg="RemovePodSandbox for \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" Dec 13 13:08:33.519830 containerd[1444]: time="2024-12-13T13:08:33.519760176Z" level=info msg="Forcibly stopping sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\"" Dec 13 13:08:33.519993 containerd[1444]: time="2024-12-13T13:08:33.519917136Z" level=info msg="TearDown network for sandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" successfully" Dec 13 13:08:33.525129 containerd[1444]: time="2024-12-13T13:08:33.524998816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.525129 containerd[1444]: time="2024-12-13T13:08:33.525057896Z" level=info msg="RemovePodSandbox \"f4d68f57e58ff45a4362d2c135d3ef7842ccf3f45df10b8b2eb6543d36fc2210\" returns successfully" Dec 13 13:08:33.525367 containerd[1444]: time="2024-12-13T13:08:33.525343896Z" level=info msg="StopPodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\"" Dec 13 13:08:33.525451 containerd[1444]: time="2024-12-13T13:08:33.525436456Z" level=info msg="TearDown network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" successfully" Dec 13 13:08:33.525478 containerd[1444]: time="2024-12-13T13:08:33.525450216Z" level=info msg="StopPodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" returns successfully" Dec 13 13:08:33.525695 containerd[1444]: time="2024-12-13T13:08:33.525674016Z" level=info msg="RemovePodSandbox for \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\"" Dec 13 13:08:33.525722 containerd[1444]: time="2024-12-13T13:08:33.525702096Z" level=info msg="Forcibly stopping sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\"" Dec 13 13:08:33.525787 containerd[1444]: time="2024-12-13T13:08:33.525771936Z" level=info msg="TearDown network for sandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" successfully" Dec 13 13:08:33.528303 containerd[1444]: time="2024-12-13T13:08:33.528263576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.528354 containerd[1444]: time="2024-12-13T13:08:33.528325376Z" level=info msg="RemovePodSandbox \"8ce431818123ae357a10e2a06319f4801d9da0db3bdc80e6ff88ce4fbaacbf6b\" returns successfully" Dec 13 13:08:33.528624 containerd[1444]: time="2024-12-13T13:08:33.528601016Z" level=info msg="StopPodSandbox for \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\"" Dec 13 13:08:33.528718 containerd[1444]: time="2024-12-13T13:08:33.528681296Z" level=info msg="TearDown network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\" successfully" Dec 13 13:08:33.528718 containerd[1444]: time="2024-12-13T13:08:33.528690616Z" level=info msg="StopPodSandbox for \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\" returns successfully" Dec 13 13:08:33.529019 containerd[1444]: time="2024-12-13T13:08:33.528985936Z" level=info msg="RemovePodSandbox for \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\"" Dec 13 13:08:33.529623 containerd[1444]: time="2024-12-13T13:08:33.529109577Z" level=info msg="Forcibly stopping sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\"" Dec 13 13:08:33.529623 containerd[1444]: time="2024-12-13T13:08:33.529190657Z" level=info msg="TearDown network for sandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\" successfully" Dec 13 13:08:33.531794 containerd[1444]: time="2024-12-13T13:08:33.531753257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.531862 containerd[1444]: time="2024-12-13T13:08:33.531820737Z" level=info msg="RemovePodSandbox \"7bcc2f392126e4c11de3a5ac865152355315160a77673196a6cb7e7f0fe1b3b3\" returns successfully" Dec 13 13:08:33.532242 containerd[1444]: time="2024-12-13T13:08:33.532185777Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" Dec 13 13:08:33.532298 containerd[1444]: time="2024-12-13T13:08:33.532280217Z" level=info msg="TearDown network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" successfully" Dec 13 13:08:33.532347 containerd[1444]: time="2024-12-13T13:08:33.532300897Z" level=info msg="StopPodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" returns successfully" Dec 13 13:08:33.532634 containerd[1444]: time="2024-12-13T13:08:33.532610897Z" level=info msg="RemovePodSandbox for \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" Dec 13 13:08:33.532682 containerd[1444]: time="2024-12-13T13:08:33.532640137Z" level=info msg="Forcibly stopping sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\"" Dec 13 13:08:33.532712 containerd[1444]: time="2024-12-13T13:08:33.532703537Z" level=info msg="TearDown network for sandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" successfully" Dec 13 13:08:33.534845 containerd[1444]: time="2024-12-13T13:08:33.534815337Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:08:33.534902 containerd[1444]: time="2024-12-13T13:08:33.534867937Z" level=info msg="RemovePodSandbox \"11af14cbd2e99571b75d9d81ccfc84e1972eee71b3e867bf76c66e8364d5bd46\" returns successfully" Dec 13 13:08:33.535275 containerd[1444]: time="2024-12-13T13:08:33.535249217Z" level=info msg="StopPodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\"" Dec 13 13:08:33.535369 containerd[1444]: time="2024-12-13T13:08:33.535348977Z" level=info msg="TearDown network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" successfully" Dec 13 13:08:33.535369 containerd[1444]: time="2024-12-13T13:08:33.535364737Z" level=info msg="StopPodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" returns successfully" Dec 13 13:08:33.536205 containerd[1444]: time="2024-12-13T13:08:33.535675737Z" level=info msg="RemovePodSandbox for \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\"" Dec 13 13:08:33.536205 containerd[1444]: time="2024-12-13T13:08:33.535743497Z" level=info msg="Forcibly stopping sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\"" Dec 13 13:08:33.536205 containerd[1444]: time="2024-12-13T13:08:33.535822657Z" level=info msg="TearDown network for sandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" successfully" Dec 13 13:08:33.538276 containerd[1444]: time="2024-12-13T13:08:33.538244177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:08:33.538443 containerd[1444]: time="2024-12-13T13:08:33.538423577Z" level=info msg="RemovePodSandbox \"7645574e91d41985d8b928f0ec11cd2aece9951aa0ed1bce8534f8d7c5f8c50e\" returns successfully"
Dec 13 13:08:33.538850 containerd[1444]: time="2024-12-13T13:08:33.538821337Z" level=info msg="StopPodSandbox for \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\""
Dec 13 13:08:33.538948 containerd[1444]: time="2024-12-13T13:08:33.538913497Z" level=info msg="TearDown network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\" successfully"
Dec 13 13:08:33.538948 containerd[1444]: time="2024-12-13T13:08:33.538945737Z" level=info msg="StopPodSandbox for \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\" returns successfully"
Dec 13 13:08:33.539882 containerd[1444]: time="2024-12-13T13:08:33.539267897Z" level=info msg="RemovePodSandbox for \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\""
Dec 13 13:08:33.539882 containerd[1444]: time="2024-12-13T13:08:33.539308777Z" level=info msg="Forcibly stopping sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\""
Dec 13 13:08:33.539882 containerd[1444]: time="2024-12-13T13:08:33.539389137Z" level=info msg="TearDown network for sandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\" successfully"
Dec 13 13:08:33.541689 containerd[1444]: time="2024-12-13T13:08:33.541658138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:08:33.541738 containerd[1444]: time="2024-12-13T13:08:33.541712978Z" level=info msg="RemovePodSandbox \"ca002f6339855b3efe0262cb09378d4272682cfbc73b94d9029dfa74c5ff09b9\" returns successfully"
Dec 13 13:08:36.454811 containerd[1444]: time="2024-12-13T13:08:36.454767593Z" level=info msg="StopContainer for \"13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9\" with timeout 300 (s)"
Dec 13 13:08:36.455404 containerd[1444]: time="2024-12-13T13:08:36.455191553Z" level=info msg="Stop container \"13dc86d3f71ca4c57f42f80c118040aa6600433c45a499b9e4c1af7db68f44f9\" with signal terminated"
Dec 13 13:08:36.561211 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:36454.service - OpenSSH per-connection server daemon (10.0.0.1:36454).
Dec 13 13:08:36.594355 containerd[1444]: time="2024-12-13T13:08:36.594226682Z" level=info msg="StopContainer for \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\" with timeout 30 (s)"
Dec 13 13:08:36.596039 containerd[1444]: time="2024-12-13T13:08:36.595996923Z" level=info msg="Stop container \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\" with signal terminated"
Dec 13 13:08:36.619565 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 36454 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:36.619171 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:36.623623 systemd-logind[1431]: New session 20 of user core.
Dec 13 13:08:36.633438 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:08:36.635122 systemd[1]: cri-containerd-1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2.scope: Deactivated successfully.
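The "StopContainer ... with timeout 300 (s)" and "with signal terminated" pair is the CRI grace-period contract: deliver SIGTERM, wait out the timeout, and only then kill. A sketch of that escalation pattern with a plain subprocess (containerd drives it through its runtime shim, not subprocess; this only illustrates the logic):

```python
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, grace_s: float) -> int:
    """SIGTERM first, wait out the grace period, then SIGKILL: the
    pattern behind the StopContainer entries above."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=grace_s)
    except subprocess.TimeoutExpired:
        proc.kill()  # grace period expired: escalate to SIGKILL
        return proc.wait()

# Example: sleep exits promptly on SIGTERM, so the timeout never fires.
print(stop_with_timeout(subprocess.Popen(["sleep", "600"]), grace_s=30))
```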
Dec 13 13:08:36.661499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2-rootfs.mount: Deactivated successfully.
Dec 13 13:08:36.664260 containerd[1444]: time="2024-12-13T13:08:36.659267647Z" level=info msg="shim disconnected" id=1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2 namespace=k8s.io
Dec 13 13:08:36.664423 containerd[1444]: time="2024-12-13T13:08:36.664269287Z" level=warning msg="cleaning up after shim disconnected" id=1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2 namespace=k8s.io
Dec 13 13:08:36.664423 containerd[1444]: time="2024-12-13T13:08:36.664294727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:08:36.690716 containerd[1444]: time="2024-12-13T13:08:36.690667969Z" level=info msg="StopContainer for \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\" returns successfully"
Dec 13 13:08:36.691650 containerd[1444]: time="2024-12-13T13:08:36.691597089Z" level=info msg="StopPodSandbox for \"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83\""
Dec 13 13:08:36.691650 containerd[1444]: time="2024-12-13T13:08:36.691638849Z" level=info msg="Container to stop \"1b124a7f299743cae986fc4fd6deb5407aca3e8f4243b0efdffa65b26d1b8ea2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:08:36.696825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83-shm.mount: Deactivated successfully.
Dec 13 13:08:36.722858 systemd[1]: cri-containerd-bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83.scope: Deactivated successfully.
Dec 13 13:08:36.745222 containerd[1444]: time="2024-12-13T13:08:36.745140493Z" level=info msg="shim disconnected" id=bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83 namespace=k8s.io
Dec 13 13:08:36.745222 containerd[1444]: time="2024-12-13T13:08:36.745212373Z" level=warning msg="cleaning up after shim disconnected" id=bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83 namespace=k8s.io
Dec 13 13:08:36.745222 containerd[1444]: time="2024-12-13T13:08:36.745221013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:08:36.746845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83-rootfs.mount: Deactivated successfully.
Dec 13 13:08:36.775132 kubelet[2622]: I1213 13:08:36.775063 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83"
Dec 13 13:08:36.847008 systemd-networkd[1389]: calice785f8c15a: Link DOWN
Dec 13 13:08:36.847020 systemd-networkd[1389]: calice785f8c15a: Lost carrier
Dec 13 13:08:36.926689 sshd[5578]: Connection closed by 10.0.0.1 port 36454
Dec 13 13:08:36.927079 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:36.931096 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:36454.service: Deactivated successfully.
Dec 13 13:08:36.934641 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:08:36.939070 systemd-logind[1431]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:08:36.940840 systemd-logind[1431]: Removed session 20.
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.843 [INFO][5654] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.843 [INFO][5654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" iface="eth0" netns="/var/run/netns/cni-357921ce-1031-1d25-f932-923108917886"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.844 [INFO][5654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" iface="eth0" netns="/var/run/netns/cni-357921ce-1031-1d25-f932-923108917886"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.865 [INFO][5654] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" after=21.252282ms iface="eth0" netns="/var/run/netns/cni-357921ce-1031-1d25-f932-923108917886"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.865 [INFO][5654] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.865 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.895 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" HandleID="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Workload="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.895 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.896 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.938 [INFO][5669] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" HandleID="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Workload="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.938 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" HandleID="k8s-pod-network.bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Workload="localhost-k8s-calico--kube--controllers--6895d58756--pfggb-eth0"
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.939 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 13:08:36.944127 containerd[1444]: 2024-12-13 13:08:36.941 [INFO][5654] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83"
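The [INFO][5654] lines are Calico's CNI plugin servicing a DEL for the old sandbox: enter the network namespace, delete the veth, then release the IPAM handle under the host-wide lock. A sketch of how a runtime issues such a DEL per the CNI spec, with parameters in CNI_* environment variables and the network config on stdin; the plugin path and the minimal config below are illustrative assumptions, not values read from this system:

```python
import json
import os
import subprocess

def cni_del(container_id: str, netns: str, ifname: str = "eth0",
            plugin: str = "/opt/cni/bin/calico") -> None:
    """Issue a CNI DEL the way a runtime does per the CNI spec:
    parameters in CNI_* environment variables, config on stdin."""
    env = dict(os.environ,
               CNI_COMMAND="DEL",
               CNI_CONTAINERID=container_id,
               CNI_NETNS=netns,
               CNI_IFNAME=ifname,
               CNI_PATH="/opt/cni/bin")
    conf = {"cniVersion": "0.4.0", "name": "k8s-pod-network", "type": "calico"}
    subprocess.run([plugin], input=json.dumps(conf).encode(), env=env, check=True)

# e.g. for the teardown above:
# cni_del("bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83",
#         "/var/run/netns/cni-357921ce-1031-1d25-f932-923108917886")
```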
ContainerID="bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83" Dec 13 13:08:36.945128 containerd[1444]: time="2024-12-13T13:08:36.945091946Z" level=info msg="TearDown network for sandbox \"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83\" successfully" Dec 13 13:08:36.945128 containerd[1444]: time="2024-12-13T13:08:36.945123266Z" level=info msg="StopPodSandbox for \"bbfb8b8434ddc0c3b5ec525780df1a6536bea6ca543f1738e37956e7cb347d83\" returns successfully" Dec 13 13:08:36.946575 systemd[1]: run-netns-cni\x2d357921ce\x2d1031\x2d1d25\x2df932\x2d923108917886.mount: Deactivated successfully. Dec 13 13:08:37.087700 kubelet[2622]: I1213 13:08:37.087319 2622 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftlp6\" (UniqueName: \"kubernetes.io/projected/afc0c628-56bd-4014-86d9-0b030f93cf65-kube-api-access-ftlp6\") pod \"afc0c628-56bd-4014-86d9-0b030f93cf65\" (UID: \"afc0c628-56bd-4014-86d9-0b030f93cf65\") " Dec 13 13:08:37.087700 kubelet[2622]: I1213 13:08:37.087369 2622 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0c628-56bd-4014-86d9-0b030f93cf65-tigera-ca-bundle\") pod \"afc0c628-56bd-4014-86d9-0b030f93cf65\" (UID: \"afc0c628-56bd-4014-86d9-0b030f93cf65\") " Dec 13 13:08:37.093783 systemd[1]: var-lib-kubelet-pods-afc0c628\x2d56bd\x2d4014\x2d86d9\x2d0b030f93cf65-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Dec 13 13:08:37.097656 kubelet[2622]: I1213 13:08:37.097614 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc0c628-56bd-4014-86d9-0b030f93cf65-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "afc0c628-56bd-4014-86d9-0b030f93cf65" (UID: "afc0c628-56bd-4014-86d9-0b030f93cf65"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:08:37.098724 kubelet[2622]: I1213 13:08:37.098697 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc0c628-56bd-4014-86d9-0b030f93cf65-kube-api-access-ftlp6" (OuterVolumeSpecName: "kube-api-access-ftlp6") pod "afc0c628-56bd-4014-86d9-0b030f93cf65" (UID: "afc0c628-56bd-4014-86d9-0b030f93cf65"). InnerVolumeSpecName "kube-api-access-ftlp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:08:37.188153 kubelet[2622]: I1213 13:08:37.188099 2622 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ftlp6\" (UniqueName: \"kubernetes.io/projected/afc0c628-56bd-4014-86d9-0b030f93cf65-kube-api-access-ftlp6\") on node \"localhost\" DevicePath \"\"" Dec 13 13:08:37.188153 kubelet[2622]: I1213 13:08:37.188137 2622 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc0c628-56bd-4014-86d9-0b030f93cf65-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 13:08:37.465321 systemd[1]: Removed slice kubepods-besteffort-podafc0c628_56bd_4014_86d9_0b030f93cf65.slice - libcontainer container kubepods-besteffort-podafc0c628_56bd_4014_86d9_0b030f93cf65.slice. Dec 13 13:08:37.661049 systemd[1]: var-lib-kubelet-pods-afc0c628\x2d56bd\x2d4014\x2d86d9\x2d0b030f93cf65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dftlp6.mount: Deactivated successfully. 
Dec 13 13:08:37.829129 kubelet[2622]: I1213 13:08:37.828869 2622 topology_manager.go:215] "Topology Admit Handler" podUID="75a5c351-d17a-49fb-ad92-79c91f02d4bf" podNamespace="calico-system" podName="calico-kube-controllers-8579758b7c-cftsd"
Dec 13 13:08:37.829129 kubelet[2622]: E1213 13:08:37.828962 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" containerName="calico-kube-controllers"
Dec 13 13:08:37.833837 kubelet[2622]: I1213 13:08:37.833373 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc0c628-56bd-4014-86d9-0b030f93cf65" containerName="calico-kube-controllers"
Dec 13 13:08:37.844467 systemd[1]: Created slice kubepods-besteffort-pod75a5c351_d17a_49fb_ad92_79c91f02d4bf.slice - libcontainer container kubepods-besteffort-pod75a5c351_d17a_49fb_ad92_79c91f02d4bf.slice.
Dec 13 13:08:37.895585 kubelet[2622]: I1213 13:08:37.895541 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a5c351-d17a-49fb-ad92-79c91f02d4bf-tigera-ca-bundle\") pod \"calico-kube-controllers-8579758b7c-cftsd\" (UID: \"75a5c351-d17a-49fb-ad92-79c91f02d4bf\") " pod="calico-system/calico-kube-controllers-8579758b7c-cftsd"
Dec 13 13:08:37.895733 kubelet[2622]: I1213 13:08:37.895668 2622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7sc5\" (UniqueName: \"kubernetes.io/projected/75a5c351-d17a-49fb-ad92-79c91f02d4bf-kube-api-access-v7sc5\") pod \"calico-kube-controllers-8579758b7c-cftsd\" (UID: \"75a5c351-d17a-49fb-ad92-79c91f02d4bf\") " pod="calico-system/calico-kube-controllers-8579758b7c-cftsd"
Dec 13 13:08:38.156181 containerd[1444]: time="2024-12-13T13:08:38.156064244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8579758b7c-cftsd,Uid:75a5c351-d17a-49fb-ad92-79c91f02d4bf,Namespace:calico-system,Attempt:0,}"
Dec 13 13:08:38.288228 systemd-networkd[1389]: cali454af256ac1: Link UP
Dec 13 13:08:38.288499 systemd-networkd[1389]: cali454af256ac1: Gained carrier
Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.217 [INFO][5690] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0 calico-kube-controllers-8579758b7c- calico-system 75a5c351-d17a-49fb-ad92-79c91f02d4bf 1267 0 2024-12-13 13:08:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8579758b7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8579758b7c-cftsd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali454af256ac1 [] []}} ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-"
Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.217 [INFO][5690] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0"
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.247 [INFO][5703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" HandleID="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Workload="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.258 [INFO][5703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" HandleID="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Workload="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e950), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8579758b7c-cftsd", "timestamp":"2024-12-13 13:08:38.247077518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.258 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.258 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.258 [INFO][5703] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.260 [INFO][5703] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.263 [INFO][5703] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.268 [INFO][5703] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.269 [INFO][5703] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.271 [INFO][5703] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.271 [INFO][5703] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.273 [INFO][5703] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.276 [INFO][5703] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.281 [INFO][5703] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.282 [INFO][5703] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" host="localhost" Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.282 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:08:38.302657 containerd[1444]: 2024-12-13 13:08:38.282 [INFO][5703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" HandleID="k8s-pod-network.71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Workload="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" Dec 13 13:08:38.303459 containerd[1444]: 2024-12-13 13:08:38.283 [INFO][5690] cni-plugin/k8s.go 386: Populated endpoint ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0", GenerateName:"calico-kube-controllers-8579758b7c-", Namespace:"calico-system", SelfLink:"", UID:"75a5c351-d17a-49fb-ad92-79c91f02d4bf", ResourceVersion:"1267", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 8, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8579758b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8579758b7c-cftsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali454af256ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:08:38.303459 containerd[1444]: 2024-12-13 13:08:38.285 [INFO][5690] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" Dec 13 13:08:38.303459 containerd[1444]: 2024-12-13 13:08:38.285 [INFO][5690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali454af256ac1 ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" Dec 13 13:08:38.303459 
Dec 13 13:08:38.303459 containerd[1444]: 2024-12-13 13:08:38.287 [INFO][5690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0"
Dec 13 13:08:38.303459 containerd[1444]: 2024-12-13 13:08:38.290 [INFO][5690] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0", GenerateName:"calico-kube-controllers-8579758b7c-", Namespace:"calico-system", SelfLink:"", UID:"75a5c351-d17a-49fb-ad92-79c91f02d4bf", ResourceVersion:"1267", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 8, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8579758b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d", Pod:"calico-kube-controllers-8579758b7c-cftsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali454af256ac1", MAC:"2e:41:58:f8:a0:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 13:08:38.303459 containerd[1444]: 2024-12-13 13:08:38.299 [INFO][5690] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d" Namespace="calico-system" Pod="calico-kube-controllers-8579758b7c-cftsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8579758b7c--cftsd-eth0"
Dec 13 13:08:38.324374 containerd[1444]: time="2024-12-13T13:08:38.324280644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:08:38.324374 containerd[1444]: time="2024-12-13T13:08:38.324330684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:08:38.324688 containerd[1444]: time="2024-12-13T13:08:38.324551886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:38.324736 containerd[1444]: time="2024-12-13T13:08:38.324652207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:08:38.351087 systemd[1]: Started cri-containerd-71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d.scope - libcontainer container 71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d.
Dec 13 13:08:38.360908 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:08:38.387403 containerd[1444]: time="2024-12-13T13:08:38.387370219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8579758b7c-cftsd,Uid:75a5c351-d17a-49fb-ad92-79c91f02d4bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d\""
Dec 13 13:08:38.408037 containerd[1444]: time="2024-12-13T13:08:38.407725179Z" level=info msg="CreateContainer within sandbox \"71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 13:08:38.420470 containerd[1444]: time="2024-12-13T13:08:38.420407318Z" level=info msg="CreateContainer within sandbox \"71600e8e6d3d6d40327680367c69fc189c4e837e3d5cfc795a9a62677ab9719d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1534a0f581b5d93fd5dcd85c081b8c796389efa3c84d1379dd6a0ee4ca17d7d9\""
Dec 13 13:08:38.421005 containerd[1444]: time="2024-12-13T13:08:38.420956163Z" level=info msg="StartContainer for \"1534a0f581b5d93fd5dcd85c081b8c796389efa3c84d1379dd6a0ee4ca17d7d9\""
Dec 13 13:08:38.464112 systemd[1]: Started cri-containerd-1534a0f581b5d93fd5dcd85c081b8c796389efa3c84d1379dd6a0ee4ca17d7d9.scope - libcontainer container 1534a0f581b5d93fd5dcd85c081b8c796389efa3c84d1379dd6a0ee4ca17d7d9.
Dec 13 13:08:38.502586 containerd[1444]: time="2024-12-13T13:08:38.502470082Z" level=info msg="StartContainer for \"1534a0f581b5d93fd5dcd85c081b8c796389efa3c84d1379dd6a0ee4ca17d7d9\" returns successfully"
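The tail of the log is the CRI happy path for the replacement calico-kube-controllers pod: RunPodSandbox, CreateContainer within the new sandbox, then StartContainer, each acknowledged with "returns successfully". The same sequence can be driven by hand with crictl against the node's CRI socket; a sketch, where pod.json and container.json are placeholder config files, not taken from this system:

```python
import subprocess

def crictl(*args: str) -> str:
    # crictl talks to the same CRI endpoint kubelet uses.
    return subprocess.run(["crictl", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# The three calls mirror the RunPodSandbox -> CreateContainer ->
# StartContainer entries that close out this log.
pod_id = crictl("runp", "pod.json")
ctr_id = crictl("create", pod_id, "container.json", "pod.json")
crictl("start", ctr_id)
print(pod_id, ctr_id)
```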