Jan 13 20:29:35.900291 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:29:35.900312 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:29:35.900321 kernel: KASLR enabled
Jan 13 20:29:35.900327 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:29:35.900333 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 13 20:29:35.900338 kernel: random: crng init done
Jan 13 20:29:35.900345 kernel: secureboot: Secure boot disabled
Jan 13 20:29:35.900351 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:29:35.900357 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:29:35.900364 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:29:35.900370 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900376 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900381 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900387 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900394 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900401 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900407 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900413 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900420 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:29:35.900425 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:29:35.900431 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:29:35.900437 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:29:35.900443 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Jan 13 20:29:35.900449 kernel: Zone ranges:
Jan 13 20:29:35.900455 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:29:35.900463 kernel: DMA32 empty
Jan 13 20:29:35.900468 kernel: Normal empty
Jan 13 20:29:35.900474 kernel: Movable zone start for each node
Jan 13 20:29:35.900480 kernel: Early memory node ranges
Jan 13 20:29:35.900486 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 13 20:29:35.900492 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 13 20:29:35.900498 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 13 20:29:35.900504 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:29:35.900510 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:29:35.900516 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:29:35.900522 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:29:35.900528 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:29:35.900535 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:29:35.900553 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:29:35.900560 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:29:35.900569 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:29:35.900575 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:29:35.900581 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:29:35.900589 kernel: psci: Trusted OS migration not required
Jan 13 20:29:35.900595 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:29:35.900602 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:29:35.900608 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:29:35.900615 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:29:35.900622 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:29:35.900628 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:29:35.900635 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:29:35.900641 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:29:35.900647 kernel: CPU features: detected: Spectre-v4
Jan 13 20:29:35.900655 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:29:35.900661 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:29:35.900668 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:29:35.900674 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:29:35.900681 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:29:35.900687 kernel: alternatives: applying boot alternatives
Jan 13 20:29:35.900694 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:29:35.900701 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:29:35.900708 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:29:35.900714 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:29:35.900721 kernel: Fallback order for Node 0: 0
Jan 13 20:29:35.900728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:29:35.900735 kernel: Policy zone: DMA
Jan 13 20:29:35.900741 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:29:35.900747 kernel: software IO TLB: area num 4.
Jan 13 20:29:35.900754 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:29:35.900761 kernel: Memory: 2385944K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186344K reserved, 0K cma-reserved)
Jan 13 20:29:35.900767 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:29:35.900774 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:29:35.900781 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:29:35.900788 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:29:35.900794 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:29:35.900801 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:29:35.900809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:29:35.900815 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:29:35.900822 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:29:35.900828 kernel: GICv3: 256 SPIs implemented
Jan 13 20:29:35.900835 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:29:35.900841 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:29:35.900848 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:29:35.900854 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:29:35.900860 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:29:35.900867 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:29:35.900874 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:29:35.900886 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:29:35.900893 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:29:35.900900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:29:35.900907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:29:35.900913 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:29:35.900920 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:29:35.900926 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:29:35.900933 kernel: arm-pv: using stolen time PV
Jan 13 20:29:35.900940 kernel: Console: colour dummy device 80x25
Jan 13 20:29:35.900947 kernel: ACPI: Core revision 20230628
Jan 13 20:29:35.900953 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:29:35.900962 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:29:35.900969 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:29:35.900975 kernel: landlock: Up and running.
Jan 13 20:29:35.900982 kernel: SELinux: Initializing.
Jan 13 20:29:35.900989 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:29:35.900995 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:29:35.901002 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:29:35.901009 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:29:35.901016 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:29:35.901024 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:29:35.901030 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:29:35.901037 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:29:35.901043 kernel: Remapping and enabling EFI services.
Jan 13 20:29:35.901050 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:29:35.901057 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:29:35.901063 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:29:35.901070 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:29:35.901077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:29:35.901085 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:29:35.901092 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:29:35.901103 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:29:35.901111 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:29:35.901118 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:29:35.901125 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:29:35.901132 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:29:35.901138 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:29:35.901146 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:29:35.901154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:29:35.901161 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:29:35.901168 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:29:35.901174 kernel: SMP: Total of 4 processors activated.
Jan 13 20:29:35.901181 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:29:35.901188 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:29:35.901195 kernel: CPU features: detected: Common not Private translations
Jan 13 20:29:35.901202 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:29:35.901210 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:29:35.901217 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:29:35.901224 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:29:35.901231 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:29:35.901238 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:29:35.901245 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:29:35.901252 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:29:35.901259 kernel: alternatives: applying system-wide alternatives
Jan 13 20:29:35.901266 kernel: devtmpfs: initialized
Jan 13 20:29:35.901274 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:29:35.901282 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:29:35.901289 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:29:35.901295 kernel: SMBIOS 3.0.0 present.
Jan 13 20:29:35.901302 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:29:35.901309 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:29:35.901317 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:29:35.901324 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:29:35.901331 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:29:35.901339 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:29:35.901346 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jan 13 20:29:35.901353 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:29:35.901359 kernel: cpuidle: using governor menu
Jan 13 20:29:35.901366 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:29:35.901374 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:29:35.901380 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:29:35.901387 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:29:35.901394 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:29:35.901402 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:29:35.901409 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:29:35.901416 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:29:35.901423 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:29:35.901430 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:29:35.901437 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:29:35.901444 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:29:35.901451 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:29:35.901458 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:29:35.901466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:29:35.901473 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:29:35.901480 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:29:35.901487 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:29:35.901494 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:29:35.901501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:29:35.901508 kernel: ACPI: Interpreter enabled
Jan 13 20:29:35.901514 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:29:35.901521 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:29:35.901528 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:29:35.901536 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:29:35.901550 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:29:35.901677 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:29:35.901749 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:29:35.901812 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:29:35.901874 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:29:35.901948 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:29:35.901961 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:29:35.901968 kernel: PCI host bridge to bus 0000:00
Jan 13 20:29:35.902035 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:29:35.902093 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:29:35.902148 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:29:35.902203 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:29:35.902281 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:29:35.902358 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:29:35.902421 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:29:35.902484 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:29:35.902568 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:29:35.902636 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:29:35.902701 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:29:35.902764 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:29:35.902827 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:29:35.902892 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:29:35.902955 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:29:35.902965 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:29:35.902972 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:29:35.902979 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:29:35.902986 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:29:35.902995 kernel: iommu: Default domain type: Translated
Jan 13 20:29:35.903003 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:29:35.903009 kernel: efivars: Registered efivars operations
Jan 13 20:29:35.903016 kernel: vgaarb: loaded
Jan 13 20:29:35.903023 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:29:35.903030 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:29:35.903037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:29:35.903044 kernel: pnp: PnP ACPI init
Jan 13 20:29:35.903117 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:29:35.903129 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:29:35.903136 kernel: NET: Registered PF_INET protocol family
Jan 13 20:29:35.903143 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:29:35.903150 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:29:35.903157 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:29:35.903164 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:29:35.903172 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:29:35.903179 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:29:35.903187 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:29:35.903194 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:29:35.903201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:29:35.903208 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:29:35.903215 kernel: kvm [1]: HYP mode not available
Jan 13 20:29:35.903222 kernel: Initialise system trusted keyrings
Jan 13 20:29:35.903229 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:29:35.903236 kernel: Key type asymmetric registered
Jan 13 20:29:35.903243 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:29:35.903250 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:29:35.903258 kernel: io scheduler mq-deadline registered
Jan 13 20:29:35.903265 kernel: io scheduler kyber registered
Jan 13 20:29:35.903272 kernel: io scheduler bfq registered
Jan 13 20:29:35.903279 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:29:35.903286 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:29:35.903294 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:29:35.903359 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:29:35.903368 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:29:35.903375 kernel: thunder_xcv, ver 1.0
Jan 13 20:29:35.903384 kernel: thunder_bgx, ver 1.0
Jan 13 20:29:35.903392 kernel: nicpf, ver 1.0
Jan 13 20:29:35.903399 kernel: nicvf, ver 1.0
Jan 13 20:29:35.903471 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:29:35.903533 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:29:35 UTC (1736800175)
Jan 13 20:29:35.903617 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:29:35.903625 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:29:35.903632 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:29:35.903643 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:29:35.903650 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:29:35.903657 kernel: Segment Routing with IPv6
Jan 13 20:29:35.903664 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:29:35.903671 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:29:35.903678 kernel: Key type dns_resolver registered
Jan 13 20:29:35.903685 kernel: registered taskstats version 1
Jan 13 20:29:35.903692 kernel: Loading compiled-in X.509 certificates
Jan 13 20:29:35.903699 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 13 20:29:35.903708 kernel: Key type .fscrypt registered
Jan 13 20:29:35.903714 kernel: Key type fscrypt-provisioning registered
Jan 13 20:29:35.903722 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:29:35.903729 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:29:35.903736 kernel: ima: No architecture policies found
Jan 13 20:29:35.903743 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:29:35.903750 kernel: clk: Disabling unused clocks
Jan 13 20:29:35.903757 kernel: Freeing unused kernel memory: 39936K
Jan 13 20:29:35.903764 kernel: Run /init as init process
Jan 13 20:29:35.903772 kernel: with arguments:
Jan 13 20:29:35.903779 kernel: /init
Jan 13 20:29:35.903786 kernel: with environment:
Jan 13 20:29:35.903792 kernel: HOME=/
Jan 13 20:29:35.903799 kernel: TERM=linux
Jan 13 20:29:35.903806 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:29:35.903815 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:29:35.903824 systemd[1]: Detected virtualization kvm.
Jan 13 20:29:35.903832 systemd[1]: Detected architecture arm64.
Jan 13 20:29:35.903840 systemd[1]: Running in initrd.
Jan 13 20:29:35.903847 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:29:35.903854 systemd[1]: Hostname set to .
Jan 13 20:29:35.903862 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:29:35.903869 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:29:35.903877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:29:35.903892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:29:35.903904 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:29:35.903911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:29:35.903919 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:29:35.903927 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:29:35.903936 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:29:35.903943 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:29:35.903953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:29:35.903960 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:29:35.903968 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:29:35.903975 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:29:35.903983 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:29:35.903990 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:29:35.903998 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:29:35.904005 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:29:35.904013 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:29:35.904022 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:29:35.904030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:29:35.904037 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:29:35.904045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:29:35.904052 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:29:35.904060 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:29:35.904067 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:29:35.904075 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:29:35.904084 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:29:35.904092 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:29:35.904099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:29:35.904107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:29:35.904115 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:29:35.904122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:29:35.904129 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:29:35.904139 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:29:35.904147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:29:35.904154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:29:35.904162 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:29:35.904170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:29:35.904198 systemd-journald[239]: Collecting audit messages is disabled.
Jan 13 20:29:35.904217 kernel: Bridge firewalling registered
Jan 13 20:29:35.904225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:29:35.904233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:29:35.904241 systemd-journald[239]: Journal started
Jan 13 20:29:35.904264 systemd-journald[239]: Runtime Journal (/run/log/journal/493be33340e843e886cef3bbe465cbd9) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:29:35.883065 systemd-modules-load[240]: Inserted module 'overlay'
Jan 13 20:29:35.899046 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 13 20:29:35.906863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:29:35.909652 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:29:35.912288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:29:35.915997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:29:35.917756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:29:35.921253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:29:35.923060 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:29:35.925341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:29:35.927712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:29:35.937051 dracut-cmdline[274]: dracut-dracut-053
Jan 13 20:29:35.939418 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:29:35.955032 systemd-resolved[276]: Positive Trust Anchors:
Jan 13 20:29:35.955051 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:29:35.955083 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:29:35.959611 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 13 20:29:35.960617 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:29:35.961844 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:29:36.013574 kernel: SCSI subsystem initialized
Jan 13 20:29:36.018559 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:29:36.025560 kernel: iscsi: registered transport (tcp)
Jan 13 20:29:36.038656 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:29:36.038716 kernel: QLogic iSCSI HBA Driver
Jan 13 20:29:36.079509 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:29:36.086715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:29:36.103487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:29:36.105057 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:29:36.105094 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:29:36.151574 kernel: raid6: neonx8 gen() 15779 MB/s
Jan 13 20:29:36.168562 kernel: raid6: neonx4 gen() 15798 MB/s
Jan 13 20:29:36.185561 kernel: raid6: neonx2 gen() 13205 MB/s
Jan 13 20:29:36.202559 kernel: raid6: neonx1 gen() 10519 MB/s
Jan 13 20:29:36.219567 kernel: raid6: int64x8 gen() 6792 MB/s
Jan 13 20:29:36.236559 kernel: raid6: int64x4 gen() 7347 MB/s
Jan 13 20:29:36.253563 kernel: raid6: int64x2 gen() 6108 MB/s
Jan 13 20:29:36.270554 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 13 20:29:36.270578 kernel: raid6: using algorithm neonx4 gen() 15798 MB/s
Jan 13 20:29:36.287555 kernel: raid6: .... xor() 12491 MB/s, rmw enabled
Jan 13 20:29:36.287595 kernel: raid6: using neon recovery algorithm
Jan 13 20:29:36.292925 kernel: xor: measuring software checksum speed
Jan 13 20:29:36.292949 kernel: 8regs : 20844 MB/sec
Jan 13 20:29:36.292959 kernel: 32regs : 21699 MB/sec
Jan 13 20:29:36.293851 kernel: arm64_neon : 27908 MB/sec
Jan 13 20:29:36.293866 kernel: xor: using function: arm64_neon (27908 MB/sec)
Jan 13 20:29:36.344574 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:29:36.365601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:29:36.383728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:29:36.396553 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 13 20:29:36.399638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:29:36.402109 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:29:36.416050 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 13 20:29:36.441951 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:29:36.457756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:29:36.497263 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:29:36.505985 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:29:36.517740 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:29:36.519213 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:29:36.520638 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:29:36.522513 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:29:36.532729 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:29:36.538565 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:29:36.555597 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:29:36.555715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:29:36.555728 kernel: GPT:9289727 != 19775487
Jan 13 20:29:36.555737 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:29:36.555745 kernel: GPT:9289727 != 19775487
Jan 13 20:29:36.555754 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:29:36.555763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:29:36.545801 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:29:36.545921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:29:36.551330 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:29:36.553921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:29:36.554110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:29:36.556041 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:29:36.561798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:29:36.563904 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:29:36.575599 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (507)
Jan 13 20:29:36.577516 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Jan 13 20:29:36.579689 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:29:36.582565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:29:36.592345 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:29:36.596328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:29:36.597513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:29:36.602817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:29:36.621734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:29:36.623593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:29:36.629368 disk-uuid[550]: Primary Header is updated.
Jan 13 20:29:36.629368 disk-uuid[550]: Secondary Entries is updated.
Jan 13 20:29:36.629368 disk-uuid[550]: Secondary Header is updated.
Jan 13 20:29:36.632562 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:29:36.649045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:29:37.642563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:29:37.643075 disk-uuid[551]: The operation has completed successfully.
Jan 13 20:29:37.669258 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:29:37.669356 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:29:37.687713 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:29:37.690346 sh[572]: Success
Jan 13 20:29:37.702592 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:29:37.732836 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:29:37.745829 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:29:37.747430 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:29:37.760138 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 13 20:29:37.760182 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:29:37.760192 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:29:37.761362 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:29:37.761401 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:29:37.764658 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:29:37.765771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:29:37.781730 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:29:37.783450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:29:37.790914 kernel: BTRFS info (device vda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:29:37.790959 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:29:37.790971 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:29:37.793967 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:29:37.800779 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:29:37.802262 kernel: BTRFS info (device vda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:29:37.807483 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:29:37.813715 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:29:37.878530 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:29:37.885761 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:29:37.910672 systemd-networkd[766]: lo: Link UP
Jan 13 20:29:37.910682 systemd-networkd[766]: lo: Gained carrier
Jan 13 20:29:37.911447 systemd-networkd[766]: Enumeration completed
Jan 13 20:29:37.911586 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:29:37.911912 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:29:37.911916 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:29:37.921264 ignition[664]: Ignition 2.20.0
Jan 13 20:29:37.912687 systemd-networkd[766]: eth0: Link UP
Jan 13 20:29:37.921270 ignition[664]: Stage: fetch-offline
Jan 13 20:29:37.912690 systemd-networkd[766]: eth0: Gained carrier
Jan 13 20:29:37.921300 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:29:37.912697 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:29:37.921308 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:29:37.915579 systemd[1]: Reached target network.target - Network.
Jan 13 20:29:37.921453 ignition[664]: parsed url from cmdline: ""
Jan 13 20:29:37.921456 ignition[664]: no config URL provided
Jan 13 20:29:37.921461 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:29:37.933618 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:29:37.921468 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:29:37.921491 ignition[664]: op(1): [started] loading QEMU firmware config module
Jan 13 20:29:37.921495 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:29:37.937209 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:29:37.976315 ignition[664]: parsing config with SHA512: 3e3cdb1480f0257577cd267fe0761766c4989de553a82d92ff4d1fdd187a23a228ecd24e958add978c7773241f189fea39a9299af3ae41aa75d851980fb2f1e6
Jan 13 20:29:37.981192 unknown[664]: fetched base config from "system"
Jan 13 20:29:37.981202 unknown[664]: fetched user config from "qemu"
Jan 13 20:29:37.982293 ignition[664]: fetch-offline: fetch-offline passed
Jan 13 20:29:37.982480 ignition[664]: Ignition finished successfully
Jan 13 20:29:37.984964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:29:37.986287 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:29:37.996734 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:29:38.006704 ignition[772]: Ignition 2.20.0
Jan 13 20:29:38.006715 ignition[772]: Stage: kargs
Jan 13 20:29:38.006886 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:29:38.006896 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:29:38.007821 ignition[772]: kargs: kargs passed
Jan 13 20:29:38.007861 ignition[772]: Ignition finished successfully
Jan 13 20:29:38.011169 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:29:38.013072 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:29:38.027103 ignition[781]: Ignition 2.20.0
Jan 13 20:29:38.027113 ignition[781]: Stage: disks
Jan 13 20:29:38.027273 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:29:38.027282 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:29:38.028115 ignition[781]: disks: disks passed
Jan 13 20:29:38.028157 ignition[781]: Ignition finished successfully
Jan 13 20:29:38.030503 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:29:38.032102 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:29:38.033702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:29:38.034525 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:29:38.035204 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:29:38.037026 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:29:38.047670 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:29:38.061586 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:29:38.065515 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:29:38.076626 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:29:38.124563 kernel: EXT4-fs (vda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:29:38.124635 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:29:38.125668 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:29:38.136615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:29:38.138104 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:29:38.139217 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:29:38.139270 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:29:38.139292 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:29:38.145382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:29:38.149309 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 13 20:29:38.149330 kernel: BTRFS info (device vda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:29:38.149340 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:29:38.148352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:29:38.152271 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:29:38.153589 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:29:38.154862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:29:38.198581 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:29:38.203075 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:29:38.206633 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:29:38.210494 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:29:38.283605 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:29:38.294701 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:29:38.297275 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:29:38.301558 kernel: BTRFS info (device vda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:29:38.316757 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:29:38.324868 ignition[914]: INFO : Ignition 2.20.0
Jan 13 20:29:38.324868 ignition[914]: INFO : Stage: mount
Jan 13 20:29:38.327212 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:29:38.327212 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:29:38.327212 ignition[914]: INFO : mount: mount passed
Jan 13 20:29:38.327212 ignition[914]: INFO : Ignition finished successfully
Jan 13 20:29:38.327390 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:29:38.343676 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:29:38.758723 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:29:38.768723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:29:38.774739 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Jan 13 20:29:38.774782 kernel: BTRFS info (device vda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:29:38.774793 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:29:38.775846 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:29:38.777554 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:29:38.778736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:29:38.794478 ignition[946]: INFO : Ignition 2.20.0
Jan 13 20:29:38.794478 ignition[946]: INFO : Stage: files
Jan 13 20:29:38.795899 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:29:38.795899 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:29:38.795899 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:29:38.798329 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:29:38.798329 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:29:38.801467 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:29:38.802626 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:29:38.802626 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:29:38.802004 unknown[946]: wrote ssh authorized keys file for user: core
Jan 13 20:29:38.806060 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:29:38.806060 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:29:38.866045 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:29:39.000742 systemd-networkd[766]: eth0: Gained IPv6LL
Jan 13 20:29:39.066168 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:29:39.066168 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:29:39.069194 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:29:39.405513 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:29:39.603776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:29:39.603776 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:29:39.607457 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:29:39.607457 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:29:39.607457 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:29:39.607457 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:29:39.607457 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:29:39.615182 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:29:39.615182 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:29:39.615182 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:29:39.630457 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:29:39.633945 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:29:39.635166 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:29:39.635166 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:29:39.635166 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:29:39.635166 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:29:39.635166 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:29:39.635166 ignition[946]: INFO : files: files passed
Jan 13 20:29:39.635166 ignition[946]: INFO : Ignition finished successfully
Jan 13 20:29:39.636579 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:29:39.648702 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:29:39.650950 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:29:39.652028 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:29:39.652107 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:29:39.657612 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:29:39.660607 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:29:39.660607 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:29:39.663269 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:29:39.663675 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:29:39.665438 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:29:39.676716 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:29:39.695628 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:29:39.695732 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:29:39.697611 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:29:39.699176 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:29:39.700784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:29:39.701528 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:29:39.715819 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:29:39.717904 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:29:39.729172 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:29:39.730271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:29:39.732027 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:29:39.733615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:29:39.733728 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:29:39.735987 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:29:39.737690 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:29:39.739139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:29:39.740782 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:29:39.742396 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:29:39.744185 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:29:39.745672 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:29:39.747365 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:29:39.749278 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:29:39.750820 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:29:39.752309 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:29:39.752422 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:29:39.754622 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:29:39.756383 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:29:39.758075 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:29:39.758172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:29:39.759836 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:29:39.759948 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:29:39.762437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:29:39.762547 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:29:39.764269 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:29:39.765704 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:29:39.765791 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:29:39.767391 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:29:39.768929 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:29:39.770418 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:29:39.770507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:29:39.772274 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:29:39.772347 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:29:39.774300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:29:39.774398 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:29:39.775852 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:29:39.775954 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:29:39.788706 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:29:39.790033 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:29:39.790765 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:29:39.790870 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:29:39.792478 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:29:39.792586 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:29:39.797838 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:29:39.797937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:29:39.802745 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:29:39.803585 ignition[1001]: INFO : Ignition 2.20.0
Jan 13 20:29:39.803585 ignition[1001]: INFO : Stage: umount
Jan 13 20:29:39.803585 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:29:39.803585 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:29:39.806331 ignition[1001]: INFO : umount: umount passed
Jan 13 20:29:39.806331 ignition[1001]: INFO : Ignition finished successfully
Jan 13 20:29:39.806932 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:29:39.808282 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:29:39.809762 systemd[1]: Stopped target network.target - Network.
Jan 13 20:29:39.810603 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:29:39.810661 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:29:39.811834 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:29:39.811878 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:29:39.813115 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:29:39.813159 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:29:39.814375 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:29:39.814409 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:29:39.815823 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:29:39.817083 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:29:39.822602 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jan 13 20:29:39.823095 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:29:39.823242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:29:39.825519 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:29:39.827334 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:29:39.828892 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:29:39.828970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:29:39.842641 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:29:39.843296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:29:39.843352 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:29:39.844892 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:29:39.844932 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:29:39.846375 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:29:39.846415 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:29:39.847965 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:29:39.848001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:29:39.849714 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:29:39.861033 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:29:39.861193 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:29:39.862945 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:29:39.863004 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:29:39.865959 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:29:39.865992 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:29:39.867492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:29:39.867531 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:29:39.872032 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:29:39.872078 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:29:39.876361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:29:39.876409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:29:39.890753 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:29:39.891583 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:29:39.891634 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:29:39.893216 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:29:39.893255 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:29:39.894827 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:29:39.894897 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:29:39.896531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:29:39.896592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:29:39.898391 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:29:39.899569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:29:39.900413 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:29:39.900497 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:29:39.902164 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:29:39.902259 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:29:39.904956 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:29:39.906422 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:29:39.906497 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:29:39.917675 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:29:39.923072 systemd[1]: Switching root.
Jan 13 20:29:39.949435 systemd-journald[239]: Journal stopped
Jan 13 20:29:40.626263 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:29:40.626315 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:29:40.626330 kernel: SELinux: policy capability open_perms=1
Jan 13 20:29:40.626350 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:29:40.626359 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:29:40.626371 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:29:40.626384 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:29:40.626393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:29:40.626402 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:29:40.626412 kernel: audit: type=1403 audit(1736800180.088:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:29:40.626422 systemd[1]: Successfully loaded SELinux policy in 29.705ms.
Jan 13 20:29:40.626437 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.710ms.
Jan 13 20:29:40.626450 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:29:40.626461 systemd[1]: Detected virtualization kvm.
Jan 13 20:29:40.626471 systemd[1]: Detected architecture arm64.
Jan 13 20:29:40.626480 systemd[1]: Detected first boot.
Jan 13 20:29:40.626491 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:29:40.626501 zram_generator::config[1046]: No configuration found.
Jan 13 20:29:40.626512 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:29:40.626522 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:29:40.626533 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:29:40.626576 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:29:40.626590 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:29:40.626601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:29:40.626612 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:29:40.626626 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:29:40.626636 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:29:40.626646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:29:40.626657 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:29:40.626670 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:29:40.626680 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:29:40.626690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:29:40.626701 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:29:40.626711 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:29:40.626721 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:29:40.626732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:29:40.626743 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:29:40.626753 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:29:40.626765 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:29:40.626775 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:29:40.626786 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:29:40.626796 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:29:40.626807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:29:40.626817 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:29:40.626828 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:29:40.626843 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:29:40.626855 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:29:40.626866 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:29:40.626882 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:29:40.626894 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:29:40.626905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:29:40.626915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:29:40.626925 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:29:40.626936 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:29:40.626946 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:29:40.626958 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:29:40.626969 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:29:40.626979 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:29:40.626990 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:29:40.627001 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:29:40.627011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:29:40.627022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:29:40.627033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:29:40.627045 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:29:40.627055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:29:40.627066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:29:40.627078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:29:40.627088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:29:40.627098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:29:40.627109 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:29:40.627119 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:29:40.627130 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:29:40.627141 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:29:40.627152 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:29:40.627162 kernel: fuse: init (API version 7.39)
Jan 13 20:29:40.627172 kernel: loop: module loaded
Jan 13 20:29:40.627181 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:29:40.627192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:29:40.627203 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:29:40.627213 kernel: ACPI: bus type drm_connector registered
Jan 13 20:29:40.627223 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:29:40.627236 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:29:40.627247 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:29:40.627257 systemd[1]: Stopped verity-setup.service.
Jan 13 20:29:40.627267 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:29:40.627280 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:29:40.627311 systemd-journald[1113]: Collecting audit messages is disabled.
Jan 13 20:29:40.627335 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:29:40.627346 systemd-journald[1113]: Journal started
Jan 13 20:29:40.627371 systemd-journald[1113]: Runtime Journal (/run/log/journal/493be33340e843e886cef3bbe465cbd9) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:29:40.436236 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:29:40.453403 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:29:40.453775 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:29:40.629569 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:29:40.630059 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:29:40.631063 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:29:40.632017 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:29:40.634596 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:29:40.635787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:29:40.637088 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:29:40.637333 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:29:40.638624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:29:40.638856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:29:40.640165 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:29:40.640371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:29:40.641583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:29:40.641791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:29:40.643146 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:29:40.643376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:29:40.644526 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:29:40.644746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:29:40.645914 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:29:40.647294 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:29:40.648557 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:29:40.661702 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:29:40.676664 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:29:40.678565 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:29:40.679439 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:29:40.679477 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:29:40.681205 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:29:40.683107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:29:40.684986 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:29:40.685954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:29:40.687469 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:29:40.690709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:29:40.691739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:29:40.692728 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:29:40.693660 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:29:40.697736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:29:40.699761 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:29:40.707124 systemd-journald[1113]: Time spent on flushing to /var/log/journal/493be33340e843e886cef3bbe465cbd9 is 34.904ms for 858 entries.
Jan 13 20:29:40.707124 systemd-journald[1113]: System Journal (/var/log/journal/493be33340e843e886cef3bbe465cbd9) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:29:40.752325 systemd-journald[1113]: Received client request to flush runtime journal.
Jan 13 20:29:40.752366 kernel: loop0: detected capacity change from 0 to 113552
Jan 13 20:29:40.704716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:29:40.706977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:29:40.708988 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:29:40.713315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:29:40.714526 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:29:40.716328 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:29:40.720156 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:29:40.724847 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:29:40.729796 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:29:40.737195 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:29:40.743401 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:29:40.754067 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:29:40.756598 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:29:40.758607 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 13 20:29:40.758623 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 13 20:29:40.762193 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:29:40.766686 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:29:40.767920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:29:40.776780 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:29:40.792590 kernel: loop1: detected capacity change from 0 to 116784
Jan 13 20:29:40.794262 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:29:40.800703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:29:40.817202 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 13 20:29:40.817222 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 13 20:29:40.822175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:29:40.824559 kernel: loop2: detected capacity change from 0 to 194512
Jan 13 20:29:40.867662 kernel: loop3: detected capacity change from 0 to 113552
Jan 13 20:29:40.873593 kernel: loop4: detected capacity change from 0 to 116784
Jan 13 20:29:40.878558 kernel: loop5: detected capacity change from 0 to 194512
Jan 13 20:29:40.884035 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:29:40.884415 (sd-merge)[1185]: Merged extensions into '/usr'.
Jan 13 20:29:40.887796 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:29:40.887816 systemd[1]: Reloading...
Jan 13 20:29:40.946578 zram_generator::config[1211]: No configuration found.
Jan 13 20:29:40.967779 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:29:41.035233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:29:41.070464 systemd[1]: Reloading finished in 182 ms.
Jan 13 20:29:41.111375 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:29:41.112822 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:29:41.128720 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:29:41.130427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:29:41.145030 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:29:41.145045 systemd[1]: Reloading...
Jan 13 20:29:41.146767 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:29:41.146987 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:29:41.147607 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:29:41.147811 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 20:29:41.147854 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 20:29:41.150317 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:29:41.150332 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 20:29:41.158196 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:29:41.158210 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 20:29:41.191582 zram_generator::config[1276]: No configuration found.
Jan 13 20:29:41.264180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:29:41.299220 systemd[1]: Reloading finished in 153 ms.
Jan 13 20:29:41.314575 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:29:41.326933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:29:41.334520 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:29:41.336880 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:29:41.338939 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:29:41.342352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:29:41.350733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:29:41.355972 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:29:41.359448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:29:41.360992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:29:41.364663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:29:41.367091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:29:41.368703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:29:41.370964 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:29:41.372842 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:29:41.377218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:29:41.377358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:29:41.378699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:29:41.378840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:29:41.380468 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:29:41.380620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:29:41.390796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:29:41.392259 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Jan 13 20:29:41.403365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:29:41.411842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:29:41.414023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:29:41.416804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:29:41.424990 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:29:41.427852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:29:41.429362 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:29:41.432092 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:29:41.435197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:29:41.435356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:29:41.437112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:29:41.437590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:29:41.439459 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:29:41.439606 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:29:41.457004 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:29:41.466646 augenrules[1372]: No rules
Jan 13 20:29:41.467378 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:29:41.467659 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:29:41.471602 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1346)
Jan 13 20:29:41.476368 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:29:41.479595 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:29:41.484393 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:29:41.498275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:29:41.505182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:29:41.507432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:29:41.512900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:29:41.517001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:29:41.518836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:29:41.522745 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:29:41.525473 systemd-resolved[1312]: Positive Trust Anchors:
Jan 13 20:29:41.525492 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:29:41.525524 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:29:41.526320 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:29:41.528096 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:29:41.528615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:29:41.528990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:29:41.530709 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:29:41.530924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:29:41.532497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:29:41.532666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:29:41.533301 systemd-resolved[1312]: Defaulting to hostname 'linux'.
Jan 13 20:29:41.534411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:29:41.534831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:29:41.536364 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:29:41.543085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:29:41.544887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:29:41.544956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:29:41.547614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:29:41.550513 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:29:41.574612 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:29:41.591795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:29:41.601585 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:29:41.603045 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:29:41.607313 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:29:41.609168 systemd-networkd[1389]: lo: Link UP
Jan 13 20:29:41.609181 systemd-networkd[1389]: lo: Gained carrier
Jan 13 20:29:41.610111 systemd-networkd[1389]: Enumeration completed
Jan 13 20:29:41.613901 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:29:41.613909 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:29:41.615095 systemd-networkd[1389]: eth0: Link UP
Jan 13 20:29:41.615104 systemd-networkd[1389]: eth0: Gained carrier
Jan 13 20:29:41.615117 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:29:41.626824 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:29:41.627925 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:29:41.629150 systemd[1]: Reached target network.target - Network.
Jan 13 20:29:41.631110 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:29:41.634672 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:29:41.635393 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Jan 13 20:29:41.636603 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:29:41.636666 systemd-timesyncd[1391]: Initial clock synchronization to Mon 2025-01-13 20:29:41.987910 UTC.
Jan 13 20:29:41.647109 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:29:41.655614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:29:41.671030 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:29:41.672362 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:29:41.673229 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:29:41.674087 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:29:41.675018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:29:41.676067 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:29:41.676983 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:29:41.677921 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:29:41.678768 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:29:41.678799 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:29:41.679436 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:29:41.680926 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:29:41.683056 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:29:41.691471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:29:41.693608 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:29:41.694921 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:29:41.695773 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:29:41.696435 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:29:41.697184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:29:41.697216 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:29:41.698145 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:29:41.699934 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:29:41.702683 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:29:41.703265 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:29:41.705741 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:29:41.706556 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:29:41.709522 jq[1421]: false
Jan 13 20:29:41.708648 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:29:41.711767 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:29:41.714751 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:29:41.717718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:29:41.720657 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:29:41.726187 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:29:41.726738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:29:41.727018 dbus-daemon[1420]: [system] SELinux support is enabled
Jan 13 20:29:41.729912 extend-filesystems[1422]: Found loop3
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found loop4
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found loop5
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda1
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda2
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda3
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found usr
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda4
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda6
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda7
Jan 13 20:29:41.734325 extend-filesystems[1422]: Found vda9
Jan 13 20:29:41.734325 extend-filesystems[1422]: Checking size of /dev/vda9
Jan 13 20:29:41.730851 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:29:41.734677 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:29:41.737922 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:29:41.743584 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:29:41.746304 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:29:41.749848 jq[1436]: true
Jan 13 20:29:41.747612 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:29:41.747920 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:29:41.748056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:29:41.751081 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:29:41.751437 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:29:41.754927 extend-filesystems[1422]: Resized partition /dev/vda9
Jan 13 20:29:41.774096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:29:41.776709 update_engine[1434]: I20250113 20:29:41.776012 1434 main.cc:92] Flatcar Update Engine starting
Jan 13 20:29:41.774136 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:29:41.782683 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:29:41.783601 update_engine[1434]: I20250113 20:29:41.777756 1434 update_check_scheduler.cc:74] Next update check in 6m23s
Jan 13 20:29:41.775147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:29:41.775166 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:29:41.777882 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:29:41.784223 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:29:41.785831 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:29:41.789607 jq[1445]: true
Jan 13 20:29:41.790556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1365)
Jan 13 20:29:41.790601 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:29:41.799088 systemd-logind[1430]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:29:41.799509 systemd-logind[1430]: New seat seat0.
Jan 13 20:29:41.800612 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:29:41.805636 tar[1444]: linux-arm64/helm
Jan 13 20:29:41.830569 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:29:41.848558 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:29:41.848558 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:29:41.848558 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:29:41.856595 extend-filesystems[1422]: Resized filesystem in /dev/vda9
Jan 13 20:29:41.853847 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:29:41.855580 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:29:41.863506 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:29:41.867803 bash[1477]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:29:41.869916 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:29:41.872522 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:29:42.000192 containerd[1449]: time="2025-01-13T20:29:42.000107120Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:29:42.029052 containerd[1449]: time="2025-01-13T20:29:42.028963912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.030527 containerd[1449]: time="2025-01-13T20:29:42.030482907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:29:42.030527 containerd[1449]: time="2025-01-13T20:29:42.030525500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:29:42.030574 containerd[1449]: time="2025-01-13T20:29:42.030544667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:29:42.030750 containerd[1449]: time="2025-01-13T20:29:42.030728946Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:29:42.030786 containerd[1449]: time="2025-01-13T20:29:42.030753499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.030835 containerd[1449]: time="2025-01-13T20:29:42.030815552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:29:42.030835 containerd[1449]: time="2025-01-13T20:29:42.030832172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031020 containerd[1449]: time="2025-01-13T20:29:42.030998954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031020 containerd[1449]: time="2025-01-13T20:29:42.031018539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031068 containerd[1449]: time="2025-01-13T20:29:42.031031108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031068 containerd[1449]: time="2025-01-13T20:29:42.031040378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031134 containerd[1449]: time="2025-01-13T20:29:42.031115668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031353 containerd[1449]: time="2025-01-13T20:29:42.031331224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031452 containerd[1449]: time="2025-01-13T20:29:42.031433030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:29:42.031479 containerd[1449]: time="2025-01-13T20:29:42.031450234Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:29:42.031542 containerd[1449]: time="2025-01-13T20:29:42.031525107Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:29:42.031605 containerd[1449]: time="2025-01-13T20:29:42.031574047Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:29:42.035465 containerd[1449]: time="2025-01-13T20:29:42.035429954Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:29:42.035529 containerd[1449]: time="2025-01-13T20:29:42.035487622Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:29:42.035755 containerd[1449]: time="2025-01-13T20:29:42.035677496Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:29:42.035784 containerd[1449]: time="2025-01-13T20:29:42.035764813Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:29:42.035784 containerd[1449]: time="2025-01-13T20:29:42.035780723Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:29:42.036021 containerd[1449]: time="2025-01-13T20:29:42.035996738Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:29:42.036394 containerd[1449]: time="2025-01-13T20:29:42.036365754Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:29:42.036513 containerd[1449]: time="2025-01-13T20:29:42.036492866Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:29:42.036537 containerd[1449]: time="2025-01-13T20:29:42.036515875Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:29:42.036537 containerd[1449]: time="2025-01-13T20:29:42.036531451Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:29:42.036571 containerd[1449]: time="2025-01-13T20:29:42.036546066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036710 containerd[1449]: time="2025-01-13T20:29:42.036670046Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036747 containerd[1449]: time="2025-01-13T20:29:42.036714101Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036786 containerd[1449]: time="2025-01-13T20:29:42.036728883Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036813 containerd[1449]: time="2025-01-13T20:29:42.036795446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036832 containerd[1449]: time="2025-01-13T20:29:42.036812609Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036832 containerd[1449]: time="2025-01-13T20:29:42.036825721Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036867 containerd[1449]: time="2025-01-13T20:29:42.036837204Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:29:42.036867 containerd[1449]: time="2025-01-13T20:29:42.036858584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.036911 containerd[1449]: time="2025-01-13T20:29:42.036872782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.036911 containerd[1449]: time="2025-01-13T20:29:42.036886228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.036911 containerd[1449]: time="2025-01-13T20:29:42.036908193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037002 containerd[1449]: time="2025-01-13T20:29:42.036980643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037030 containerd[1449]: time="2025-01-13T20:29:42.037006659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037030 containerd[1449]: time="2025-01-13T20:29:42.037019604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037065 containerd[1449]: time="2025-01-13T20:29:42.037032465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037065 containerd[1449]: time="2025-01-13T20:29:42.037046037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037104 containerd[1449]: time="2025-01-13T20:29:42.037062949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037104 containerd[1449]: time="2025-01-13T20:29:42.037075518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037104 containerd[1449]: time="2025-01-13T20:29:42.037087377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037160 containerd[1449]: time="2025-01-13T20:29:42.037105417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.037179 containerd[1449]: time="2025-01-13T20:29:42.037166634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037195698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037215115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037226724Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037497818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037521495Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037532645Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037544504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037554401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037887923Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037909930Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:29:42.038050 containerd[1449]: time="2025-01-13T20:29:42.037921580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:29:42.040636 containerd[1449]: time="2025-01-13T20:29:42.038293186Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:29:42.040636 containerd[1449]: time="2025-01-13T20:29:42.040062270Z" level=info msg="Connect containerd service"
Jan 13 20:29:42.040636 containerd[1449]: time="2025-01-13T20:29:42.040129710Z" level=info msg="using legacy CRI server"
Jan 13 20:29:42.040636 containerd[1449]: time="2025-01-13T20:29:42.040138228Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:29:42.040880 containerd[1449]: time="2025-01-13T20:29:42.040847157Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:29:42.043913 containerd[1449]: time="2025-01-13T20:29:42.043876127Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:29:42.044293
containerd[1449]: time="2025-01-13T20:29:42.044167640Z" level=info msg="Start subscribing containerd event" Jan 13 20:29:42.044293 containerd[1449]: time="2025-01-13T20:29:42.044238295Z" level=info msg="Start recovering state" Jan 13 20:29:42.044405 containerd[1449]: time="2025-01-13T20:29:42.044390921Z" level=info msg="Start event monitor" Jan 13 20:29:42.044581 containerd[1449]: time="2025-01-13T20:29:42.044464541Z" level=info msg="Start snapshots syncer" Jan 13 20:29:42.044581 containerd[1449]: time="2025-01-13T20:29:42.044479407Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:29:42.044581 containerd[1449]: time="2025-01-13T20:29:42.044487675Z" level=info msg="Start streaming server" Jan 13 20:29:42.045302 containerd[1449]: time="2025-01-13T20:29:42.044995538Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:29:42.045369 containerd[1449]: time="2025-01-13T20:29:42.045351275Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:29:42.045506 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:29:42.048552 containerd[1449]: time="2025-01-13T20:29:42.047384773Z" level=info msg="containerd successfully booted in 0.048797s" Jan 13 20:29:42.159806 tar[1444]: linux-arm64/LICENSE Jan 13 20:29:42.159923 tar[1444]: linux-arm64/README.md Jan 13 20:29:42.171202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:29:42.379065 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:29:42.398641 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:29:42.410889 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:29:42.417201 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:29:42.418648 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:29:42.421178 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:29:42.433903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:29:42.436888 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:29:42.438863 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:29:42.440073 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:29:43.224951 systemd-networkd[1389]: eth0: Gained IPv6LL Jan 13 20:29:43.227776 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:29:43.229424 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:29:43.244796 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:29:43.247045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:29:43.248809 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:29:43.262857 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:29:43.263090 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:29:43.264739 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:29:43.269618 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:29:43.737457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:29:43.738776 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 13 20:29:43.739990 systemd[1]: Startup finished in 532ms (kernel) + 4.392s (initrd) + 3.684s (userspace) = 8.610s. Jan 13 20:29:43.741448 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:29:43.767624 agetty[1507]: failed to open credentials directory Jan 13 20:29:43.767671 agetty[1508]: failed to open credentials directory Jan 13 20:29:44.232244 kubelet[1531]: E0113 20:29:44.232102 1531 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:29:44.234850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:29:44.234992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:29:48.433245 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:29:48.434381 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:37858.service - OpenSSH per-connection server daemon (10.0.0.1:37858). Jan 13 20:29:48.500746 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 37858 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:48.502575 sshd-session[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:48.513462 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:29:48.522881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:29:48.524608 systemd-logind[1430]: New session 1 of user core. Jan 13 20:29:48.531801 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:29:48.534899 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:29:48.540493 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:29:48.615534 systemd[1549]: Queued start job for default target default.target. Jan 13 20:29:48.625498 systemd[1549]: Created slice app.slice - User Application Slice. Jan 13 20:29:48.625543 systemd[1549]: Reached target paths.target - Paths. Jan 13 20:29:48.625554 systemd[1549]: Reached target timers.target - Timers. Jan 13 20:29:48.626817 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:29:48.636567 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:29:48.636630 systemd[1549]: Reached target sockets.target - Sockets. Jan 13 20:29:48.636642 systemd[1549]: Reached target basic.target - Basic System. Jan 13 20:29:48.636679 systemd[1549]: Reached target default.target - Main User Target. Jan 13 20:29:48.636707 systemd[1549]: Startup finished in 91ms. Jan 13 20:29:48.636967 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:29:48.638519 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:29:48.698599 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:37864.service - OpenSSH per-connection server daemon (10.0.0.1:37864). 
Jan 13 20:29:48.759487 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 37864 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:48.760680 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:48.764639 systemd-logind[1430]: New session 2 of user core. Jan 13 20:29:48.777729 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:29:48.830608 sshd[1562]: Connection closed by 10.0.0.1 port 37864 Jan 13 20:29:48.831753 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Jan 13 20:29:48.841883 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:37864.service: Deactivated successfully. Jan 13 20:29:48.843309 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:29:48.845911 systemd-logind[1430]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:29:48.847660 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:37866.service - OpenSSH per-connection server daemon (10.0.0.1:37866). Jan 13 20:29:48.848690 systemd-logind[1430]: Removed session 2. Jan 13 20:29:48.892707 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 37866 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:48.893936 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:48.897689 systemd-logind[1430]: New session 3 of user core. Jan 13 20:29:48.908714 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:29:48.957165 sshd[1569]: Connection closed by 10.0.0.1 port 37866 Jan 13 20:29:48.957783 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Jan 13 20:29:48.970156 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:37866.service: Deactivated successfully. Jan 13 20:29:48.972953 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:29:48.974279 systemd-logind[1430]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:29:48.975523 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:37872.service - OpenSSH per-connection server daemon (10.0.0.1:37872). Jan 13 20:29:48.976235 systemd-logind[1430]: Removed session 3. Jan 13 20:29:49.020536 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 37872 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:49.021971 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:49.026227 systemd-logind[1430]: New session 4 of user core. Jan 13 20:29:49.037702 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:29:49.089877 sshd[1576]: Connection closed by 10.0.0.1 port 37872 Jan 13 20:29:49.090310 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Jan 13 20:29:49.105918 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:37872.service: Deactivated successfully. Jan 13 20:29:49.107268 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:29:49.108515 systemd-logind[1430]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:29:49.109633 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:37886.service - OpenSSH per-connection server daemon (10.0.0.1:37886). Jan 13 20:29:49.110442 systemd-logind[1430]: Removed session 4. 
Jan 13 20:29:49.154397 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 37886 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:49.155584 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:49.159395 systemd-logind[1430]: New session 5 of user core. Jan 13 20:29:49.170697 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:29:49.229090 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:29:49.229672 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:29:49.244475 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 13 20:29:49.245901 sshd[1583]: Connection closed by 10.0.0.1 port 37886 Jan 13 20:29:49.246455 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Jan 13 20:29:49.259120 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:37886.service: Deactivated successfully. Jan 13 20:29:49.260718 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:29:49.263746 systemd-logind[1430]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:29:49.265111 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:37900.service - OpenSSH per-connection server daemon (10.0.0.1:37900). Jan 13 20:29:49.266941 systemd-logind[1430]: Removed session 5. Jan 13 20:29:49.311067 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 37900 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:49.312287 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:49.315853 systemd-logind[1430]: New session 6 of user core. Jan 13 20:29:49.335722 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:29:49.387413 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:29:49.387708 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:29:49.390787 sudo[1593]: pam_unix(sudo:session): session closed for user root Jan 13 20:29:49.395457 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:29:49.395745 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:29:49.415947 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:29:49.437734 augenrules[1615]: No rules Jan 13 20:29:49.438896 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:29:49.440605 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:29:49.441573 sudo[1592]: pam_unix(sudo:session): session closed for user root Jan 13 20:29:49.443025 sshd[1591]: Connection closed by 10.0.0.1 port 37900 Jan 13 20:29:49.443468 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jan 13 20:29:49.453938 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:37900.service: Deactivated successfully. Jan 13 20:29:49.455346 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:29:49.456516 systemd-logind[1430]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:29:49.471918 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:37912.service - OpenSSH per-connection server daemon (10.0.0.1:37912). Jan 13 20:29:49.472853 systemd-logind[1430]: Removed session 6. 
Jan 13 20:29:49.513228 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 37912 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:29:49.514240 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:29:49.518200 systemd-logind[1430]: New session 7 of user core. Jan 13 20:29:49.531714 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:29:49.582463 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:29:49.583159 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:29:49.924839 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:29:49.924953 (dockerd)[1647]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:29:50.175203 dockerd[1647]: time="2025-01-13T20:29:50.175078437Z" level=info msg="Starting up" Jan 13 20:29:50.348177 dockerd[1647]: time="2025-01-13T20:29:50.348124546Z" level=info msg="Loading containers: start." Jan 13 20:29:50.496601 kernel: Initializing XFRM netlink socket Jan 13 20:29:50.573423 systemd-networkd[1389]: docker0: Link UP Jan 13 20:29:50.604983 dockerd[1647]: time="2025-01-13T20:29:50.604924083Z" level=info msg="Loading containers: done." Jan 13 20:29:50.618305 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2198096878-merged.mount: Deactivated successfully. Jan 13 20:29:50.622014 dockerd[1647]: time="2025-01-13T20:29:50.621957836Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:29:50.622104 dockerd[1647]: time="2025-01-13T20:29:50.622080988Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:29:50.622304 dockerd[1647]: time="2025-01-13T20:29:50.622275279Z" level=info msg="Daemon has completed initialization" Jan 13 20:29:50.652411 dockerd[1647]: time="2025-01-13T20:29:50.652353065Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:29:50.652586 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:29:51.435418 containerd[1449]: time="2025-01-13T20:29:51.435374104Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:29:52.206079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721970966.mount: Deactivated successfully. 
Jan 13 20:29:54.400574 containerd[1449]: time="2025-01-13T20:29:54.400366341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:54.401477 containerd[1449]: time="2025-01-13T20:29:54.401239403Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 20:29:54.402420 containerd[1449]: time="2025-01-13T20:29:54.402365121Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:54.405193 containerd[1449]: time="2025-01-13T20:29:54.405142978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:54.406380 containerd[1449]: time="2025-01-13T20:29:54.406277413Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.970860104s" Jan 13 20:29:54.406380 containerd[1449]: time="2025-01-13T20:29:54.406311190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:29:54.424129 containerd[1449]: time="2025-01-13T20:29:54.424094526Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:29:54.485597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:29:54.494722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:29:54.586792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:29:54.590975 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:29:54.666386 kubelet[1924]: E0113 20:29:54.665497 1924 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:29:54.669175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:29:54.669319 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:29:57.193258 containerd[1449]: time="2025-01-13T20:29:57.193206302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:57.194999 containerd[1449]: time="2025-01-13T20:29:57.194867255Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Jan 13 20:29:57.195715 containerd[1449]: time="2025-01-13T20:29:57.195685721Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:57.199031 containerd[1449]: time="2025-01-13T20:29:57.198971332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:57.200036 containerd[1449]: time="2025-01-13T20:29:57.199991024Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.775859238s" Jan 13 20:29:57.200222 containerd[1449]: time="2025-01-13T20:29:57.200121433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:29:57.218220 containerd[1449]: time="2025-01-13T20:29:57.218179220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:29:58.980213 containerd[1449]: time="2025-01-13T20:29:58.980160844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:58.980647 containerd[1449]: time="2025-01-13T20:29:58.980608797Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Jan 13 20:29:58.981492 containerd[1449]: time="2025-01-13T20:29:58.981465138Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:58.984187 containerd[1449]: time="2025-01-13T20:29:58.984150080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:29:58.985365 containerd[1449]: time="2025-01-13T20:29:58.985333228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.767107016s" Jan 13 20:29:58.985411 containerd[1449]: time="2025-01-13T20:29:58.985364710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:29:59.003430 
containerd[1449]: time="2025-01-13T20:29:59.003381509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:30:00.148672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774994785.mount: Deactivated successfully. Jan 13 20:30:00.333515 containerd[1449]: time="2025-01-13T20:30:00.333425611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:00.333856 containerd[1449]: time="2025-01-13T20:30:00.333814149Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Jan 13 20:30:00.334633 containerd[1449]: time="2025-01-13T20:30:00.334595963Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:00.336342 containerd[1449]: time="2025-01-13T20:30:00.336298259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:00.336928 containerd[1449]: time="2025-01-13T20:30:00.336891849Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.333469319s" Jan 13 20:30:00.336966 containerd[1449]: time="2025-01-13T20:30:00.336928755Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:30:00.355271 containerd[1449]: time="2025-01-13T20:30:00.355236608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:30:00.916515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504646716.mount: Deactivated successfully. 
Jan 13 20:30:01.768983 containerd[1449]: time="2025-01-13T20:30:01.768924614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:01.769785 containerd[1449]: time="2025-01-13T20:30:01.769744294Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:30:01.772054 containerd[1449]: time="2025-01-13T20:30:01.772017409Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:01.775256 containerd[1449]: time="2025-01-13T20:30:01.774710981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:01.775967 containerd[1449]: time="2025-01-13T20:30:01.775886012Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.420500218s" Jan 13 20:30:01.775967 containerd[1449]: time="2025-01-13T20:30:01.775919408Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:30:01.794070 containerd[1449]: time="2025-01-13T20:30:01.793915355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:30:02.216946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334705158.mount: Deactivated successfully. 
Jan 13 20:30:02.220950 containerd[1449]: time="2025-01-13T20:30:02.220683477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:02.222510 containerd[1449]: time="2025-01-13T20:30:02.222445621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 20:30:02.223506 containerd[1449]: time="2025-01-13T20:30:02.223457008Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:02.225530 containerd[1449]: time="2025-01-13T20:30:02.225464296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:02.226573 containerd[1449]: time="2025-01-13T20:30:02.226512635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 432.562444ms" Jan 13 20:30:02.226624 containerd[1449]: time="2025-01-13T20:30:02.226576951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:30:02.245283 containerd[1449]: time="2025-01-13T20:30:02.245247133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:30:02.815322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350302507.mount: Deactivated successfully. Jan 13 20:30:04.919599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:30:04.928797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:05.018865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:05.022197 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:30:05.065415 kubelet[2081]: E0113 20:30:05.065300 2081 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:30:05.068881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:30:05.069159 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:30:05.499002 containerd[1449]: time="2025-01-13T20:30:05.498937126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:05.499499 containerd[1449]: time="2025-01-13T20:30:05.499440470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 20:30:05.500263 containerd[1449]: time="2025-01-13T20:30:05.500228713Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:05.503336 containerd[1449]: time="2025-01-13T20:30:05.503298476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:05.504579 containerd[1449]: time="2025-01-13T20:30:05.504534750Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.259253563s" Jan 13 20:30:05.504617 containerd[1449]: time="2025-01-13T20:30:05.504578359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:30:10.283172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:10.293841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:10.308898 systemd[1]: Reloading requested from client PID 2173 ('systemctl') (unit session-7.scope)... Jan 13 20:30:10.308914 systemd[1]: Reloading... Jan 13 20:30:10.388615 zram_generator::config[2209]: No configuration found. Jan 13 20:30:10.482073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:30:10.539053 systemd[1]: Reloading finished in 229 ms. Jan 13 20:30:10.576140 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:30:10.576206 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:30:10.576417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:10.578597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:10.669313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:10.674516 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:30:10.721181 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:30:10.721181 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 20:30:10.721181 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:30:10.721557 kubelet[2258]: I0113 20:30:10.721189 2258 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:30:11.500462 kubelet[2258]: I0113 20:30:11.500428 2258 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:30:11.501524 kubelet[2258]: I0113 20:30:11.500681 2258 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:30:11.501524 kubelet[2258]: I0113 20:30:11.500896 2258 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:30:11.532718 kubelet[2258]: I0113 20:30:11.532681 2258 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:30:11.532949 kubelet[2258]: E0113 20:30:11.532923 2258 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.542756 kubelet[2258]: I0113 20:30:11.542724 2258 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:30:11.543697 kubelet[2258]: I0113 20:30:11.543669 2258 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:30:11.543898 kubelet[2258]: I0113 20:30:11.543876 2258 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:30:11.543979 kubelet[2258]: I0113 20:30:11.543901 2258 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:30:11.543979 kubelet[2258]: I0113 20:30:11.543910 2258 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 
20:30:11.544977 kubelet[2258]: I0113 20:30:11.544940 2258 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:30:11.548834 kubelet[2258]: I0113 20:30:11.548807 2258 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:30:11.548859 kubelet[2258]: I0113 20:30:11.548837 2258 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:30:11.548887 kubelet[2258]: I0113 20:30:11.548859 2258 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:30:11.548887 kubelet[2258]: I0113 20:30:11.548874 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:30:11.549413 kubelet[2258]: W0113 20:30:11.549359 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.549443 kubelet[2258]: E0113 20:30:11.549416 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.549891 kubelet[2258]: W0113 20:30:11.549846 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.549891 kubelet[2258]: E0113 20:30:11.549891 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.550526 kubelet[2258]: I0113 20:30:11.550490 2258 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:30:11.551008 kubelet[2258]: I0113 20:30:11.550981 2258 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:30:11.551821 kubelet[2258]: W0113 20:30:11.551612 2258 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:30:11.554352 kubelet[2258]: I0113 20:30:11.554326 2258 server.go:1256] "Started kubelet" Jan 13 20:30:11.554790 kubelet[2258]: I0113 20:30:11.554732 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:30:11.555357 kubelet[2258]: I0113 20:30:11.555003 2258 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:30:11.555357 kubelet[2258]: I0113 20:30:11.555064 2258 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:30:11.556114 kubelet[2258]: I0113 20:30:11.556091 2258 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:30:11.556299 kubelet[2258]: I0113 20:30:11.556274 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:30:11.558927 kubelet[2258]: I0113 20:30:11.557798 2258 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:30:11.558927 kubelet[2258]: I0113 20:30:11.557875 2258 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:30:11.558927 kubelet[2258]: I0113 20:30:11.557936 2258 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:30:11.558927 kubelet[2258]: W0113 20:30:11.558188 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.558927 kubelet[2258]: E0113 20:30:11.558223 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.558927 kubelet[2258]: E0113 20:30:11.558269 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:11.558927 kubelet[2258]: E0113 20:30:11.558468 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Jan 13 20:30:11.559648 kubelet[2258]: I0113 20:30:11.559627 2258 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:30:11.559742 kubelet[2258]: I0113 20:30:11.559724 2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:30:11.560998 kubelet[2258]: E0113 20:30:11.560976 2258 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:30:11.561061 kubelet[2258]: I0113 20:30:11.561015 2258 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:30:11.561362 kubelet[2258]: E0113 20:30:11.561342 2258 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5a97278460b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:30:11.554295993 +0000 UTC m=+0.876484197,LastTimestamp:2025-01-13 20:30:11.554295993 +0000 UTC m=+0.876484197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:30:11.572490 kubelet[2258]: I0113 20:30:11.572463 2258 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:30:11.572490 kubelet[2258]: I0113 20:30:11.572485 2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:30:11.572639 kubelet[2258]: I0113 20:30:11.572504 2258 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:30:11.573445 kubelet[2258]: I0113 20:30:11.573417 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:30:11.574784 kubelet[2258]: I0113 20:30:11.574763 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:30:11.574849 kubelet[2258]: I0113 20:30:11.574796 2258 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:30:11.574849 kubelet[2258]: I0113 20:30:11.574814 2258 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:30:11.574894 kubelet[2258]: E0113 20:30:11.574871 2258 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:30:11.575631 kubelet[2258]: W0113 20:30:11.575573 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.575631 kubelet[2258]: E0113 20:30:11.575634 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:11.596208 kubelet[2258]: I0113 20:30:11.596100 2258 policy_none.go:49] "None policy: Start" Jan 13 20:30:11.596953 kubelet[2258]: I0113 20:30:11.596925 2258 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:30:11.597010 kubelet[2258]: I0113 20:30:11.596976 2258 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:30:11.602826 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:30:11.614245 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:30:11.617044 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:30:11.625771 kubelet[2258]: I0113 20:30:11.625274 2258 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:30:11.625771 kubelet[2258]: I0113 20:30:11.625575 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:30:11.627646 kubelet[2258]: E0113 20:30:11.627620 2258 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:30:11.660307 kubelet[2258]: I0113 20:30:11.660230 2258 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:11.660662 kubelet[2258]: E0113 20:30:11.660645 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 13 20:30:11.675920 kubelet[2258]: I0113 20:30:11.675846 2258 topology_manager.go:215] "Topology Admit Handler" podUID="63c12522a4d6c32add2ff16340ce1b58" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:30:11.676803 kubelet[2258]: I0113 20:30:11.676781 2258 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:30:11.677854 kubelet[2258]: I0113 20:30:11.677719 2258 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:30:11.682751 systemd[1]: Created slice kubepods-burstable-pod63c12522a4d6c32add2ff16340ce1b58.slice - libcontainer container kubepods-burstable-pod63c12522a4d6c32add2ff16340ce1b58.slice. Jan 13 20:30:11.692657 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 20:30:11.707908 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
Jan 13 20:30:11.759181 kubelet[2258]: E0113 20:30:11.759047 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Jan 13 20:30:11.859342 kubelet[2258]: I0113 20:30:11.859274 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:11.859342 kubelet[2258]: I0113 20:30:11.859316 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:11.859342 kubelet[2258]: I0113 20:30:11.859343 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:11.859583 kubelet[2258]: I0113 20:30:11.859367 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:11.859583 kubelet[2258]: I0113 20:30:11.859387 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:30:11.859583 kubelet[2258]: I0113 20:30:11.859407 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63c12522a4d6c32add2ff16340ce1b58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"63c12522a4d6c32add2ff16340ce1b58\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:11.859583 kubelet[2258]: I0113 20:30:11.859424 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63c12522a4d6c32add2ff16340ce1b58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"63c12522a4d6c32add2ff16340ce1b58\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:11.859583 kubelet[2258]: I0113 20:30:11.859444 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 
20:30:11.859705 kubelet[2258]: I0113 20:30:11.859461 2258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63c12522a4d6c32add2ff16340ce1b58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"63c12522a4d6c32add2ff16340ce1b58\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:11.862596 kubelet[2258]: I0113 20:30:11.862312 2258 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:11.862670 kubelet[2258]: E0113 20:30:11.862649 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 13 20:30:11.993288 kubelet[2258]: E0113 20:30:11.993242 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:11.993978 containerd[1449]: time="2025-01-13T20:30:11.993937242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:63c12522a4d6c32add2ff16340ce1b58,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:12.006120 kubelet[2258]: E0113 20:30:12.006080 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:12.006558 containerd[1449]: time="2025-01-13T20:30:12.006519373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:12.009822 kubelet[2258]: E0113 20:30:12.009724 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:12.010036 containerd[1449]: time="2025-01-13T20:30:12.010012126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:12.159698 kubelet[2258]: E0113 20:30:12.159649 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Jan 13 20:30:12.264329 kubelet[2258]: I0113 20:30:12.264223 2258 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:12.264576 kubelet[2258]: E0113 20:30:12.264534 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 13 20:30:12.439627 kubelet[2258]: W0113 20:30:12.439582 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:12.439627 kubelet[2258]: E0113 20:30:12.439623 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused
Jan 13 20:30:12.504692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261620977.mount: Deactivated successfully. Jan 13 20:30:12.510392 containerd[1449]: time="2025-01-13T20:30:12.510285082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:30:12.512888 containerd[1449]: time="2025-01-13T20:30:12.512827515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:30:12.513607 containerd[1449]: time="2025-01-13T20:30:12.513570229Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:30:12.514673 containerd[1449]: time="2025-01-13T20:30:12.514554016Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:30:12.515182 containerd[1449]: time="2025-01-13T20:30:12.515055897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:30:12.516158 containerd[1449]: time="2025-01-13T20:30:12.516119628Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:30:12.516638 containerd[1449]: time="2025-01-13T20:30:12.516519948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:30:12.517769 containerd[1449]: time="2025-01-13T20:30:12.517738362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:30:12.520788 containerd[1449]: time="2025-01-13T20:30:12.520717384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.702071ms" Jan 13 20:30:12.522215 containerd[1449]: time="2025-01-13T20:30:12.522047208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.440325ms" Jan 13 20:30:12.524447 containerd[1449]: time="2025-01-13T20:30:12.524412219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.344489ms" Jan 13 20:30:12.646914 containerd[1449]: time="2025-01-13T20:30:12.646666338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:12.646914 containerd[1449]: time="2025-01-13T20:30:12.646754168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:12.646914 containerd[1449]: time="2025-01-13T20:30:12.646770141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:12.646914 containerd[1449]: time="2025-01-13T20:30:12.646857130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:12.648220 containerd[1449]: time="2025-01-13T20:30:12.647941718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:12.648220 containerd[1449]: time="2025-01-13T20:30:12.648049204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:12.648220 containerd[1449]: time="2025-01-13T20:30:12.648064816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:12.648220 containerd[1449]: time="2025-01-13T20:30:12.648136954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:12.648590 containerd[1449]: time="2025-01-13T20:30:12.648500645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:12.648990 containerd[1449]: time="2025-01-13T20:30:12.648917418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:12.649071 containerd[1449]: time="2025-01-13T20:30:12.648984712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:12.649840 containerd[1449]: time="2025-01-13T20:30:12.649789515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:12.667768 systemd[1]: Started cri-containerd-4a42d7fb29bad209cbcffbb64025808cf442fe721b105e19e72c11ca59a8f4a4.scope - libcontainer container 4a42d7fb29bad209cbcffbb64025808cf442fe721b105e19e72c11ca59a8f4a4. Jan 13 20:30:12.671172 systemd[1]: Started cri-containerd-666cd7fe8a8ea0655fa7162c2175aa1ea03dc0530e25846375329150b527d45f.scope - libcontainer container 666cd7fe8a8ea0655fa7162c2175aa1ea03dc0530e25846375329150b527d45f. Jan 13 20:30:12.672238 systemd[1]: Started cri-containerd-e8d869d90daa04a26b95643e7182fe42c354bc8cded1b4770e9af7e8e16bf923.scope - libcontainer container e8d869d90daa04a26b95643e7182fe42c354bc8cded1b4770e9af7e8e16bf923. 
Jan 13 20:30:12.698471 containerd[1449]: time="2025-01-13T20:30:12.698336135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a42d7fb29bad209cbcffbb64025808cf442fe721b105e19e72c11ca59a8f4a4\"" Jan 13 20:30:12.700773 kubelet[2258]: E0113 20:30:12.700706 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:12.706574 containerd[1449]: time="2025-01-13T20:30:12.705927886Z" level=info msg="CreateContainer within sandbox \"4a42d7fb29bad209cbcffbb64025808cf442fe721b105e19e72c11ca59a8f4a4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:30:12.712361 containerd[1449]: time="2025-01-13T20:30:12.712223600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:63c12522a4d6c32add2ff16340ce1b58,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8d869d90daa04a26b95643e7182fe42c354bc8cded1b4770e9af7e8e16bf923\"" Jan 13 20:30:12.712753 containerd[1449]: time="2025-01-13T20:30:12.712725321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"666cd7fe8a8ea0655fa7162c2175aa1ea03dc0530e25846375329150b527d45f\"" Jan 13 20:30:12.713329 kubelet[2258]: E0113 20:30:12.713310 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:12.713769 kubelet[2258]: E0113 20:30:12.713748 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:12.715240 containerd[1449]: time="2025-01-13T20:30:12.715143815Z" level=info msg="CreateContainer within sandbox \"666cd7fe8a8ea0655fa7162c2175aa1ea03dc0530e25846375329150b527d45f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:30:12.716056 containerd[1449]: time="2025-01-13T20:30:12.716011789Z" level=info msg="CreateContainer within sandbox \"e8d869d90daa04a26b95643e7182fe42c354bc8cded1b4770e9af7e8e16bf923\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:30:12.717967 kubelet[2258]: W0113 20:30:12.717899 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:12.718026 kubelet[2258]: E0113 20:30:12.717976 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:12.724826 containerd[1449]: time="2025-01-13T20:30:12.724781842Z" level=info msg="CreateContainer within sandbox \"4a42d7fb29bad209cbcffbb64025808cf442fe721b105e19e72c11ca59a8f4a4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3946b7b9665a6c99f2af133567596b4bd51e018c9802fd96eeb6d746f87510cd\"" Jan 13 20:30:12.725685 containerd[1449]: 
time="2025-01-13T20:30:12.725663867Z" level=info msg="StartContainer for \"3946b7b9665a6c99f2af133567596b4bd51e018c9802fd96eeb6d746f87510cd\"" Jan 13 20:30:12.728186 containerd[1449]: time="2025-01-13T20:30:12.728152617Z" level=info msg="CreateContainer within sandbox \"666cd7fe8a8ea0655fa7162c2175aa1ea03dc0530e25846375329150b527d45f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f6b29057b9e3894d43e85b4c7fe7e74fd136ce5e09d60a0d33e9df06ba6c205e\"" Jan 13 20:30:12.728990 containerd[1449]: time="2025-01-13T20:30:12.728964546Z" level=info msg="StartContainer for \"f6b29057b9e3894d43e85b4c7fe7e74fd136ce5e09d60a0d33e9df06ba6c205e\"" Jan 13 20:30:12.730579 containerd[1449]: time="2025-01-13T20:30:12.730436283Z" level=info msg="CreateContainer within sandbox \"e8d869d90daa04a26b95643e7182fe42c354bc8cded1b4770e9af7e8e16bf923\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b48640d8042d8b14619f33c4838794b1c4173af5b8bbace8247d08e76963ceac\"" Jan 13 20:30:12.730889 containerd[1449]: time="2025-01-13T20:30:12.730867348Z" level=info msg="StartContainer for \"b48640d8042d8b14619f33c4838794b1c4173af5b8bbace8247d08e76963ceac\"" Jan 13 20:30:12.760762 systemd[1]: Started cri-containerd-f6b29057b9e3894d43e85b4c7fe7e74fd136ce5e09d60a0d33e9df06ba6c205e.scope - libcontainer container f6b29057b9e3894d43e85b4c7fe7e74fd136ce5e09d60a0d33e9df06ba6c205e. Jan 13 20:30:12.763937 systemd[1]: Started cri-containerd-3946b7b9665a6c99f2af133567596b4bd51e018c9802fd96eeb6d746f87510cd.scope - libcontainer container 3946b7b9665a6c99f2af133567596b4bd51e018c9802fd96eeb6d746f87510cd. Jan 13 20:30:12.764858 systemd[1]: Started cri-containerd-b48640d8042d8b14619f33c4838794b1c4173af5b8bbace8247d08e76963ceac.scope - libcontainer container b48640d8042d8b14619f33c4838794b1c4173af5b8bbace8247d08e76963ceac. 
Jan 13 20:30:12.813706 containerd[1449]: time="2025-01-13T20:30:12.813472482Z" level=info msg="StartContainer for \"b48640d8042d8b14619f33c4838794b1c4173af5b8bbace8247d08e76963ceac\" returns successfully" Jan 13 20:30:12.814006 containerd[1449]: time="2025-01-13T20:30:12.813506389Z" level=info msg="StartContainer for \"f6b29057b9e3894d43e85b4c7fe7e74fd136ce5e09d60a0d33e9df06ba6c205e\" returns successfully" Jan 13 20:30:12.824694 containerd[1449]: time="2025-01-13T20:30:12.824584127Z" level=info msg="StartContainer for \"3946b7b9665a6c99f2af133567596b4bd51e018c9802fd96eeb6d746f87510cd\" returns successfully" Jan 13 20:30:12.948933 kubelet[2258]: W0113 20:30:12.948879 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:12.948933 kubelet[2258]: E0113 20:30:12.948936 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:12.960884 kubelet[2258]: E0113 20:30:12.960848 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" Jan 13 20:30:12.987884 kubelet[2258]: W0113 20:30:12.987774 2258 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:12.987884 kubelet[2258]: E0113 20:30:12.987858 2258 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jan 13 20:30:13.066627 kubelet[2258]: I0113 20:30:13.066501 2258 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:13.583332 kubelet[2258]: E0113 20:30:13.583247 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:13.586650 kubelet[2258]: E0113 20:30:13.586555 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:13.586924 kubelet[2258]: E0113 20:30:13.586879 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:14.588429 kubelet[2258]: E0113 20:30:14.588401 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:14.790127 kubelet[2258]: E0113 20:30:14.790094 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:30:14.912474 kubelet[2258]: I0113 
20:30:14.909530 2258 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:30:14.924970 kubelet[2258]: E0113 20:30:14.924933 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.025901 kubelet[2258]: E0113 20:30:15.025846 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.126644 kubelet[2258]: E0113 20:30:15.126602 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.227164 kubelet[2258]: E0113 20:30:15.227044 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.327548 kubelet[2258]: E0113 20:30:15.327496 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.427989 kubelet[2258]: E0113 20:30:15.427951 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.528730 kubelet[2258]: E0113 20:30:15.528638 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.630653 kubelet[2258]: E0113 20:30:15.629693 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:15.730452 kubelet[2258]: E0113 20:30:15.730411 2258 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:30:16.552390 kubelet[2258]: I0113 20:30:16.552361 2258 apiserver.go:52] "Watching apiserver" Jan 13 20:30:16.558600 kubelet[2258]: I0113 20:30:16.558567 2258 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:30:17.085127 kubelet[2258]: E0113 20:30:17.085033 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:17.592513 kubelet[2258]: E0113 20:30:17.592067 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:17.830330 systemd[1]: Reloading requested from client PID 2537 ('systemctl') (unit session-7.scope)... Jan 13 20:30:17.830346 systemd[1]: Reloading... Jan 13 20:30:17.893571 zram_generator::config[2579]: No configuration found. Jan 13 20:30:17.978734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:30:18.043838 systemd[1]: Reloading finished in 213 ms. Jan 13 20:30:18.074582 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:18.074840 kubelet[2258]: I0113 20:30:18.074471 2258 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:30:18.080609 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:30:18.080883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:18.080963 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 119.8M memory peak, 0B memory swap peak. 
Jan 13 20:30:18.095804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:18.183399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:18.187196 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:30:18.237194 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:30:18.237194 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:30:18.237194 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:30:18.237625 kubelet[2618]: I0113 20:30:18.237235 2618 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:30:18.241302 kubelet[2618]: I0113 20:30:18.241272 2618 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:30:18.241302 kubelet[2618]: I0113 20:30:18.241300 2618 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:30:18.241475 kubelet[2618]: I0113 20:30:18.241461 2618 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:30:18.243059 kubelet[2618]: I0113 20:30:18.243024 2618 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:30:18.246574 kubelet[2618]: I0113 20:30:18.246485 2618 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:30:18.256670 kubelet[2618]: I0113 20:30:18.256639 2618 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:30:18.256929 kubelet[2618]: I0113 20:30:18.256899 2618 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:30:18.257225 kubelet[2618]: I0113 20:30:18.257193 2618 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:30:18.257225 kubelet[2618]: I0113 20:30:18.257225 2618 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:30:18.257336 kubelet[2618]: I0113 20:30:18.257234 2618 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:30:18.257336 kubelet[2618]: I0113 20:30:18.257266 2618 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:30:18.257387 kubelet[2618]: I0113 20:30:18.257357 2618 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:30:18.257387 kubelet[2618]: I0113 20:30:18.257370 2618 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:30:18.257424 kubelet[2618]: I0113 20:30:18.257393 2618 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:30:18.257424 kubelet[2618]: I0113 20:30:18.257406 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:30:18.263584 kubelet[2618]: I0113 20:30:18.260870 2618 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:30:18.263584 kubelet[2618]: I0113 20:30:18.261065 2618 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:30:18.263584 kubelet[2618]: I0113 20:30:18.261498 2618 server.go:1256] "Started kubelet" Jan 13 20:30:18.263584 kubelet[2618]: I0113 20:30:18.261794 2618 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:30:18.263584 kubelet[2618]: I0113 20:30:18.262523 2618 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:30:18.263584 kubelet[2618]: I0113 20:30:18.263451 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:30:18.263798 kubelet[2618]: I0113 20:30:18.263699 2618 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Jan 13 20:30:18.263928 kubelet[2618]: I0113 20:30:18.263910 2618 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:30:18.273965 kubelet[2618]: I0113 20:30:18.272247 2618 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:30:18.273965 kubelet[2618]: I0113 20:30:18.272354 2618 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:30:18.273965 kubelet[2618]: I0113 20:30:18.272480 2618 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:30:18.278653 kubelet[2618]: E0113 20:30:18.278169 2618 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:30:18.278653 kubelet[2618]: I0113 20:30:18.278316 2618 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:30:18.278653 kubelet[2618]: I0113 20:30:18.278399 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:30:18.289445 kubelet[2618]: I0113 20:30:18.286786 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:30:18.289445 kubelet[2618]: I0113 20:30:18.286848 2618 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:30:18.295186 kubelet[2618]: I0113 20:30:18.295063 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:30:18.295186 kubelet[2618]: I0113 20:30:18.295192 2618 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:30:18.296119 kubelet[2618]: I0113 20:30:18.295210 2618 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:30:18.296119 kubelet[2618]: E0113 20:30:18.295373 2618 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:30:18.327829 kubelet[2618]: I0113 20:30:18.327795 2618 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:30:18.327829 kubelet[2618]: I0113 20:30:18.327821 2618 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:30:18.327974 kubelet[2618]: I0113 20:30:18.327851 2618 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:30:18.328252 kubelet[2618]: I0113 20:30:18.327998 2618 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:30:18.328252 kubelet[2618]: I0113 20:30:18.328022 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:30:18.328252 kubelet[2618]: I0113 20:30:18.328029 2618 policy_none.go:49] "None policy: Start" Jan 13 20:30:18.328659 kubelet[2618]: I0113 20:30:18.328640 2618 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:30:18.328705 kubelet[2618]: I0113 20:30:18.328666 2618 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:30:18.328810 kubelet[2618]: I0113 20:30:18.328794 2618 state_mem.go:75] "Updated machine memory state" Jan 13 20:30:18.332716 kubelet[2618]: I0113 20:30:18.332693 2618 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:30:18.333591 kubelet[2618]: I0113 20:30:18.332908 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:30:18.375948 kubelet[2618]: I0113 
20:30:18.375915 2618 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:18.383120 kubelet[2618]: I0113 20:30:18.383058 2618 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:30:18.383289 kubelet[2618]: I0113 20:30:18.383142 2618 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:30:18.395971 kubelet[2618]: I0113 20:30:18.395727 2618 topology_manager.go:215] "Topology Admit Handler" podUID="63c12522a4d6c32add2ff16340ce1b58" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:30:18.395971 kubelet[2618]: I0113 20:30:18.395808 2618 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:30:18.395971 kubelet[2618]: I0113 20:30:18.395857 2618 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:30:18.403116 kubelet[2618]: E0113 20:30:18.403087 2618 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 20:30:18.573958 kubelet[2618]: I0113 20:30:18.573841 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63c12522a4d6c32add2ff16340ce1b58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"63c12522a4d6c32add2ff16340ce1b58\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:18.573958 kubelet[2618]: I0113 20:30:18.573892 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63c12522a4d6c32add2ff16340ce1b58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"63c12522a4d6c32add2ff16340ce1b58\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:18.573958 kubelet[2618]: I0113 20:30:18.573916 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:18.573958 kubelet[2618]: I0113 20:30:18.573942 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:18.573958 kubelet[2618]: I0113 20:30:18.573965 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63c12522a4d6c32add2ff16340ce1b58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"63c12522a4d6c32add2ff16340ce1b58\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:18.574152 kubelet[2618]: I0113 20:30:18.573984 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:18.574152 kubelet[2618]: I0113 20:30:18.574004 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:18.574152 kubelet[2618]: I0113 20:30:18.574025 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:18.574152 kubelet[2618]: I0113 20:30:18.574045 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:30:18.702061 kubelet[2618]: E0113 20:30:18.701981 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:18.702520 kubelet[2618]: E0113 20:30:18.702481 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:18.704475 kubelet[2618]: E0113 20:30:18.704445 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:19.260568 kubelet[2618]: I0113 20:30:19.258000 2618 apiserver.go:52] "Watching apiserver" Jan 13 20:30:19.272998 kubelet[2618]: I0113 20:30:19.272942 2618 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:30:19.309887 kubelet[2618]: E0113 20:30:19.309793 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:19.310711 kubelet[2618]: E0113 20:30:19.310369 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:19.312231 kubelet[2618]: I0113 20:30:19.312204 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.31216698 podStartE2EDuration="1.31216698s" podCreationTimestamp="2025-01-13 20:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:19.312038456 +0000 UTC m=+1.120944962" watchObservedRunningTime="2025-01-13 20:30:19.31216698 +0000 UTC m=+1.121073446" Jan 13 20:30:19.319521 kubelet[2618]: E0113 20:30:19.319483 2618 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 
20:30:19.319986 kubelet[2618]: E0113 20:30:19.319961 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:19.333056 kubelet[2618]: I0113 20:30:19.332942 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.332901503 podStartE2EDuration="1.332901503s" podCreationTimestamp="2025-01-13 20:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:19.332124435 +0000 UTC m=+1.141030901" watchObservedRunningTime="2025-01-13 20:30:19.332901503 +0000 UTC m=+1.141807969" Jan 13 20:30:19.340601 kubelet[2618]: I0113 20:30:19.340564 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.340497272 podStartE2EDuration="2.340497272s" podCreationTimestamp="2025-01-13 20:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:19.340355499 +0000 UTC m=+1.149261965" watchObservedRunningTime="2025-01-13 20:30:19.340497272 +0000 UTC m=+1.149403698" Jan 13 20:30:20.313570 kubelet[2618]: E0113 20:30:20.312467 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:20.601315 kubelet[2618]: E0113 20:30:20.601212 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:22.631448 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 13 20:30:22.634684 sshd[1625]: Connection closed by 10.0.0.1 port 37912 Jan 13 20:30:22.635653 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:22.639119 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:37912.service: Deactivated successfully. Jan 13 20:30:22.642634 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:30:22.642847 systemd[1]: session-7.scope: Consumed 7.005s CPU time, 195.1M memory peak, 0B memory swap peak. Jan 13 20:30:22.643667 systemd-logind[1430]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:30:22.644841 systemd-logind[1430]: Removed session 7. Jan 13 20:30:22.818975 kubelet[2618]: E0113 20:30:22.818906 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:23.318724 kubelet[2618]: E0113 20:30:23.316131 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:27.133129 update_engine[1434]: I20250113 20:30:27.133045 1434 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:30:27.169571 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2715) Jan 13 20:30:27.195590 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2717) Jan 13 20:30:27.230603 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2717) Jan 13 20:30:27.537349 kubelet[2618]: E0113 20:30:27.537244 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:28.325590 kubelet[2618]: E0113 20:30:28.323289 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:30.081695 kubelet[2618]: I0113 20:30:30.081666 2618 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:30:30.082058 containerd[1449]: time="2025-01-13T20:30:30.081999838Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:30:30.082487 kubelet[2618]: I0113 20:30:30.082319 2618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:30:30.610170 kubelet[2618]: E0113 20:30:30.610132 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:31.169621 kubelet[2618]: I0113 20:30:31.169521 2618 topology_manager.go:215] "Topology Admit Handler" podUID="8f3e5a13-fae1-42d1-822f-5d366effbc26" podNamespace="kube-system" podName="kube-proxy-4dv5j" Jan 13 20:30:31.183686 systemd[1]: Created slice kubepods-besteffort-pod8f3e5a13_fae1_42d1_822f_5d366effbc26.slice - libcontainer container kubepods-besteffort-pod8f3e5a13_fae1_42d1_822f_5d366effbc26.slice. Jan 13 20:30:31.236208 kubelet[2618]: I0113 20:30:31.236110 2618 topology_manager.go:215] "Topology Admit Handler" podUID="392c36b8-f33e-4689-9bca-9b0d1a58d435" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-2r7sg" Jan 13 20:30:31.245931 systemd[1]: Created slice kubepods-besteffort-pod392c36b8_f33e_4689_9bca_9b0d1a58d435.slice - libcontainer container kubepods-besteffort-pod392c36b8_f33e_4689_9bca_9b0d1a58d435.slice. 
Jan 13 20:30:31.360737 kubelet[2618]: I0113 20:30:31.360686 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6wbp\" (UniqueName: \"kubernetes.io/projected/8f3e5a13-fae1-42d1-822f-5d366effbc26-kube-api-access-p6wbp\") pod \"kube-proxy-4dv5j\" (UID: \"8f3e5a13-fae1-42d1-822f-5d366effbc26\") " pod="kube-system/kube-proxy-4dv5j" Jan 13 20:30:31.360737 kubelet[2618]: I0113 20:30:31.360735 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f3e5a13-fae1-42d1-822f-5d366effbc26-kube-proxy\") pod \"kube-proxy-4dv5j\" (UID: \"8f3e5a13-fae1-42d1-822f-5d366effbc26\") " pod="kube-system/kube-proxy-4dv5j" Jan 13 20:30:31.360877 kubelet[2618]: I0113 20:30:31.360761 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f3e5a13-fae1-42d1-822f-5d366effbc26-lib-modules\") pod \"kube-proxy-4dv5j\" (UID: \"8f3e5a13-fae1-42d1-822f-5d366effbc26\") " pod="kube-system/kube-proxy-4dv5j" Jan 13 20:30:31.360877 kubelet[2618]: I0113 20:30:31.360782 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/392c36b8-f33e-4689-9bca-9b0d1a58d435-var-lib-calico\") pod \"tigera-operator-c7ccbd65-2r7sg\" (UID: \"392c36b8-f33e-4689-9bca-9b0d1a58d435\") " pod="tigera-operator/tigera-operator-c7ccbd65-2r7sg" Jan 13 20:30:31.361719 kubelet[2618]: I0113 20:30:31.361685 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6xcw\" (UniqueName: \"kubernetes.io/projected/392c36b8-f33e-4689-9bca-9b0d1a58d435-kube-api-access-t6xcw\") pod \"tigera-operator-c7ccbd65-2r7sg\" (UID: \"392c36b8-f33e-4689-9bca-9b0d1a58d435\") " pod="tigera-operator/tigera-operator-c7ccbd65-2r7sg" Jan 13 20:30:31.361758 kubelet[2618]: I0113 20:30:31.361743 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f3e5a13-fae1-42d1-822f-5d366effbc26-xtables-lock\") pod \"kube-proxy-4dv5j\" (UID: \"8f3e5a13-fae1-42d1-822f-5d366effbc26\") " pod="kube-system/kube-proxy-4dv5j" Jan 13 20:30:31.495335 kubelet[2618]: E0113 20:30:31.495200 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:31.495975 containerd[1449]: time="2025-01-13T20:30:31.495804725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dv5j,Uid:8f3e5a13-fae1-42d1-822f-5d366effbc26,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:31.519650 containerd[1449]: time="2025-01-13T20:30:31.519446123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:31.519650 containerd[1449]: time="2025-01-13T20:30:31.519519348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:31.520310 containerd[1449]: time="2025-01-13T20:30:31.519534594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:31.520828 containerd[1449]: time="2025-01-13T20:30:31.520427950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:31.542779 systemd[1]: Started cri-containerd-7286d0eb71e124fa3c8c5f446b3ea3a618aac5f0e4a617046d1a39c86843c054.scope - libcontainer container 7286d0eb71e124fa3c8c5f446b3ea3a618aac5f0e4a617046d1a39c86843c054. Jan 13 20:30:31.551245 containerd[1449]: time="2025-01-13T20:30:31.551188785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-2r7sg,Uid:392c36b8-f33e-4689-9bca-9b0d1a58d435,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:30:31.563993 containerd[1449]: time="2025-01-13T20:30:31.563943814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dv5j,Uid:8f3e5a13-fae1-42d1-822f-5d366effbc26,Namespace:kube-system,Attempt:0,} returns sandbox id \"7286d0eb71e124fa3c8c5f446b3ea3a618aac5f0e4a617046d1a39c86843c054\"" Jan 13 20:30:31.573040 kubelet[2618]: E0113 20:30:31.573004 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:31.575557 containerd[1449]: time="2025-01-13T20:30:31.575505581Z" level=info msg="CreateContainer within sandbox \"7286d0eb71e124fa3c8c5f446b3ea3a618aac5f0e4a617046d1a39c86843c054\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:30:31.579402 containerd[1449]: time="2025-01-13T20:30:31.579307685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:31.579402 containerd[1449]: time="2025-01-13T20:30:31.579381271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:31.579402 containerd[1449]: time="2025-01-13T20:30:31.579394516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:31.580804 containerd[1449]: time="2025-01-13T20:30:31.579629639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:31.599740 systemd[1]: Started cri-containerd-d9cc0fee16116d18c0e771c2336a33184131846593691572e31dd274598e271b.scope - libcontainer container d9cc0fee16116d18c0e771c2336a33184131846593691572e31dd274598e271b. 
Jan 13 20:30:31.610297 containerd[1449]: time="2025-01-13T20:30:31.610248864Z" level=info msg="CreateContainer within sandbox \"7286d0eb71e124fa3c8c5f446b3ea3a618aac5f0e4a617046d1a39c86843c054\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b15af1e0d83e2cdf6e5f91c41de891cd55a40a034ae438348067bc667f8cb612\"" Jan 13 20:30:31.611242 containerd[1449]: time="2025-01-13T20:30:31.611137498Z" level=info msg="StartContainer for \"b15af1e0d83e2cdf6e5f91c41de891cd55a40a034ae438348067bc667f8cb612\"" Jan 13 20:30:31.630670 containerd[1449]: time="2025-01-13T20:30:31.630630149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-2r7sg,Uid:392c36b8-f33e-4689-9bca-9b0d1a58d435,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d9cc0fee16116d18c0e771c2336a33184131846593691572e31dd274598e271b\"" Jan 13 20:30:31.638793 containerd[1449]: time="2025-01-13T20:30:31.638758103Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:30:31.644733 systemd[1]: Started cri-containerd-b15af1e0d83e2cdf6e5f91c41de891cd55a40a034ae438348067bc667f8cb612.scope - libcontainer container b15af1e0d83e2cdf6e5f91c41de891cd55a40a034ae438348067bc667f8cb612. Jan 13 20:30:31.675903 containerd[1449]: time="2025-01-13T20:30:31.675862660Z" level=info msg="StartContainer for \"b15af1e0d83e2cdf6e5f91c41de891cd55a40a034ae438348067bc667f8cb612\" returns successfully" Jan 13 20:30:32.332791 kubelet[2618]: E0113 20:30:32.332756 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:32.344729 kubelet[2618]: I0113 20:30:32.343752 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4dv5j" podStartSLOduration=1.343700662 podStartE2EDuration="1.343700662s" podCreationTimestamp="2025-01-13 20:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:32.342712569 +0000 UTC m=+14.151619035" watchObservedRunningTime="2025-01-13 20:30:32.343700662 +0000 UTC m=+14.152607128" Jan 13 20:30:33.221741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332592730.mount: Deactivated successfully. 
Jan 13 20:30:33.470760 containerd[1449]: time="2025-01-13T20:30:33.470702834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:33.471821 containerd[1449]: time="2025-01-13T20:30:33.471723723Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125960" Jan 13 20:30:33.473301 containerd[1449]: time="2025-01-13T20:30:33.472766139Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:33.475257 containerd[1449]: time="2025-01-13T20:30:33.475228093Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:33.475932 containerd[1449]: time="2025-01-13T20:30:33.475892307Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.837090389s" Jan 13 20:30:33.475991 containerd[1449]: time="2025-01-13T20:30:33.475935361Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 13 20:30:33.489108 containerd[1449]: time="2025-01-13T20:30:33.489074396Z" level=info msg="CreateContainer within sandbox \"d9cc0fee16116d18c0e771c2336a33184131846593691572e31dd274598e271b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:30:33.498975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553380152.mount: Deactivated successfully. Jan 13 20:30:33.500322 containerd[1449]: time="2025-01-13T20:30:33.500268005Z" level=info msg="CreateContainer within sandbox \"d9cc0fee16116d18c0e771c2336a33184131846593691572e31dd274598e271b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a51dde541c0912f2c585376d9d632a5233b2e5fab5e58d24128657ea5df90067\"" Jan 13 20:30:33.503198 containerd[1449]: time="2025-01-13T20:30:33.503171221Z" level=info msg="StartContainer for \"a51dde541c0912f2c585376d9d632a5233b2e5fab5e58d24128657ea5df90067\"" Jan 13 20:30:33.534711 systemd[1]: Started cri-containerd-a51dde541c0912f2c585376d9d632a5233b2e5fab5e58d24128657ea5df90067.scope - libcontainer container a51dde541c0912f2c585376d9d632a5233b2e5fab5e58d24128657ea5df90067. 
Jan 13 20:30:33.556878 containerd[1449]: time="2025-01-13T20:30:33.556741572Z" level=info msg="StartContainer for \"a51dde541c0912f2c585376d9d632a5233b2e5fab5e58d24128657ea5df90067\" returns successfully" Jan 13 20:30:37.911151 kubelet[2618]: I0113 20:30:37.911097 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-2r7sg" podStartSLOduration=5.065465323 podStartE2EDuration="6.911045017s" podCreationTimestamp="2025-01-13 20:30:31 +0000 UTC" firstStartedPulling="2025-01-13 20:30:31.632099709 +0000 UTC m=+13.441006175" lastFinishedPulling="2025-01-13 20:30:33.477679403 +0000 UTC m=+15.286585869" observedRunningTime="2025-01-13 20:30:34.468028922 +0000 UTC m=+16.276935388" watchObservedRunningTime="2025-01-13 20:30:37.911045017 +0000 UTC m=+19.719951483" Jan 13 20:30:37.911908 kubelet[2618]: I0113 20:30:37.911872 2618 topology_manager.go:215] "Topology Admit Handler" podUID="9873965f-8ae0-4029-ab74-2758b65617cb" podNamespace="calico-system" podName="calico-typha-6f76f774c9-q7pdp" Jan 13 20:30:37.925906 systemd[1]: Created slice kubepods-besteffort-pod9873965f_8ae0_4029_ab74_2758b65617cb.slice - libcontainer container kubepods-besteffort-pod9873965f_8ae0_4029_ab74_2758b65617cb.slice. Jan 13 20:30:37.965293 kubelet[2618]: I0113 20:30:37.965246 2618 topology_manager.go:215] "Topology Admit Handler" podUID="504d0597-cd75-4257-beda-cc4e8cbbf672" podNamespace="calico-system" podName="calico-node-2d9jq" Jan 13 20:30:37.972765 systemd[1]: Created slice kubepods-besteffort-pod504d0597_cd75_4257_beda_cc4e8cbbf672.slice - libcontainer container kubepods-besteffort-pod504d0597_cd75_4257_beda_cc4e8cbbf672.slice. Jan 13 20:30:38.004800 kubelet[2618]: I0113 20:30:38.004752 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-xtables-lock\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.004800 kubelet[2618]: I0113 20:30:38.004801 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-cni-net-dir\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.004965 kubelet[2618]: I0113 20:30:38.004825 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-cni-log-dir\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.004965 kubelet[2618]: I0113 20:30:38.004849 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/504d0597-cd75-4257-beda-cc4e8cbbf672-tigera-ca-bundle\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.004965 kubelet[2618]: I0113 20:30:38.004869 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-var-run-calico\") pod \"calico-node-2d9jq\" (UID: 
\"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.004965 kubelet[2618]: I0113 20:30:38.004890 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9873965f-8ae0-4029-ab74-2758b65617cb-tigera-ca-bundle\") pod \"calico-typha-6f76f774c9-q7pdp\" (UID: \"9873965f-8ae0-4029-ab74-2758b65617cb\") " pod="calico-system/calico-typha-6f76f774c9-q7pdp" Jan 13 20:30:38.004965 kubelet[2618]: I0113 20:30:38.004909 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9873965f-8ae0-4029-ab74-2758b65617cb-typha-certs\") pod \"calico-typha-6f76f774c9-q7pdp\" (UID: \"9873965f-8ae0-4029-ab74-2758b65617cb\") " pod="calico-system/calico-typha-6f76f774c9-q7pdp" Jan 13 20:30:38.005682 kubelet[2618]: I0113 20:30:38.004933 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cxk\" (UniqueName: \"kubernetes.io/projected/504d0597-cd75-4257-beda-cc4e8cbbf672-kube-api-access-m8cxk\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.005682 kubelet[2618]: I0113 20:30:38.004953 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-var-lib-calico\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.005682 kubelet[2618]: I0113 20:30:38.004975 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/504d0597-cd75-4257-beda-cc4e8cbbf672-node-certs\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.005682 kubelet[2618]: I0113 20:30:38.004994 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-cni-bin-dir\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.005682 kubelet[2618]: I0113 20:30:38.005014 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-lib-modules\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.005835 kubelet[2618]: I0113 20:30:38.005032 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-flexvol-driver-host\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.005835 kubelet[2618]: I0113 20:30:38.005054 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7k8m\" (UniqueName: \"kubernetes.io/projected/9873965f-8ae0-4029-ab74-2758b65617cb-kube-api-access-p7k8m\") pod \"calico-typha-6f76f774c9-q7pdp\" (UID: 
\"9873965f-8ae0-4029-ab74-2758b65617cb\") " pod="calico-system/calico-typha-6f76f774c9-q7pdp" Jan 13 20:30:38.005835 kubelet[2618]: I0113 20:30:38.005073 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/504d0597-cd75-4257-beda-cc4e8cbbf672-policysync\") pod \"calico-node-2d9jq\" (UID: \"504d0597-cd75-4257-beda-cc4e8cbbf672\") " pod="calico-system/calico-node-2d9jq" Jan 13 20:30:38.079531 kubelet[2618]: I0113 20:30:38.079288 2618 topology_manager.go:215] "Topology Admit Handler" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" podNamespace="calico-system" podName="csi-node-driver-jl4cq" Jan 13 20:30:38.082153 kubelet[2618]: E0113 20:30:38.080673 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:38.106213 kubelet[2618]: I0113 20:30:38.106171 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5lv\" (UniqueName: \"kubernetes.io/projected/727a9f8b-291c-4cff-81c1-972e6591d923-kube-api-access-cz5lv\") pod \"csi-node-driver-jl4cq\" (UID: \"727a9f8b-291c-4cff-81c1-972e6591d923\") " pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:38.107268 kubelet[2618]: I0113 20:30:38.106853 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/727a9f8b-291c-4cff-81c1-972e6591d923-kubelet-dir\") pod \"csi-node-driver-jl4cq\" (UID: \"727a9f8b-291c-4cff-81c1-972e6591d923\") " pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:38.107268 kubelet[2618]: I0113 20:30:38.107070 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/727a9f8b-291c-4cff-81c1-972e6591d923-varrun\") pod \"csi-node-driver-jl4cq\" (UID: \"727a9f8b-291c-4cff-81c1-972e6591d923\") " pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:38.107268 kubelet[2618]: I0113 20:30:38.107130 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/727a9f8b-291c-4cff-81c1-972e6591d923-socket-dir\") pod \"csi-node-driver-jl4cq\" (UID: \"727a9f8b-291c-4cff-81c1-972e6591d923\") " pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:38.107268 kubelet[2618]: I0113 20:30:38.107190 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/727a9f8b-291c-4cff-81c1-972e6591d923-registration-dir\") pod \"csi-node-driver-jl4cq\" (UID: \"727a9f8b-291c-4cff-81c1-972e6591d923\") " pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:38.112624 kubelet[2618]: E0113 20:30:38.112403 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:38.112624 kubelet[2618]: W0113 20:30:38.112450 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:38.112624 kubelet[2618]: E0113 20:30:38.112500 2618 
Jan 13 20:30:38.112624 kubelet[2618]: E0113 20:30:38.112403 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:30:38.112624 kubelet[2618]: W0113 20:30:38.112450 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:30:38.112624 kubelet[2618]: E0113 20:30:38.112500 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three-line FlexVolume probe failure repeats a few dozen more times between 20:30:38.114 and 20:30:38.230; the repeats are elided here.]
Jan 13 20:30:38.232089 kubelet[2618]: E0113 20:30:38.232034 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:38.233295 containerd[1449]: time="2025-01-13T20:30:38.232694641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f76f774c9-q7pdp,Uid:9873965f-8ae0-4029-ab74-2758b65617cb,Namespace:calico-system,Attempt:0,}"
Jan 13 20:30:38.275314 containerd[1449]: time="2025-01-13T20:30:38.275165286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:30:38.275314 containerd[1449]: time="2025-01-13T20:30:38.275219780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:30:38.275314 containerd[1449]: time="2025-01-13T20:30:38.275230623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:38.275314 containerd[1449]: time="2025-01-13T20:30:38.275315325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:38.276827 kubelet[2618]: E0113 20:30:38.276415 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:38.278852 containerd[1449]: time="2025-01-13T20:30:38.277164486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2d9jq,Uid:504d0597-cd75-4257-beda-cc4e8cbbf672,Namespace:calico-system,Attempt:0,}"
Jan 13 20:30:38.298778 systemd[1]: Started cri-containerd-0f844f62d3731ea64b94573ea76fdc660381c4312ab1e940916a923bd8c42b42.scope - libcontainer container 0f844f62d3731ea64b94573ea76fdc660381c4312ab1e940916a923bd8c42b42.
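[Annotation] The dns.go:153 entries above mean the node's /etc/resolv.conf lists more than three nameservers. Classic resolvers only consult the first three entries, and the kubelet enforces the same cap, logging the trimmed line it will actually apply (here 1.1.1.1 1.0.0.1 8.8.8.8). An illustrative sketch of that truncation rule (not kubelet source):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolvers use only the first three entries

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the warning phrasing seen in the journal above.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("using:", servers)
}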
Jan 13 20:30:38.334753 containerd[1449]: time="2025-01-13T20:30:38.334702049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f76f774c9-q7pdp,Uid:9873965f-8ae0-4029-ab74-2758b65617cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f844f62d3731ea64b94573ea76fdc660381c4312ab1e940916a923bd8c42b42\""
Jan 13 20:30:38.335787 kubelet[2618]: E0113 20:30:38.335768 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:38.337995 containerd[1449]: time="2025-01-13T20:30:38.337960736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:30:38.368483 containerd[1449]: time="2025-01-13T20:30:38.368188677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:30:38.368483 containerd[1449]: time="2025-01-13T20:30:38.368276100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:30:38.368483 containerd[1449]: time="2025-01-13T20:30:38.368288263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:38.371422 containerd[1449]: time="2025-01-13T20:30:38.370559614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:38.397764 systemd[1]: Started cri-containerd-fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27.scope - libcontainer container fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27.
Jan 13 20:30:38.418693 containerd[1449]: time="2025-01-13T20:30:38.418636957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2d9jq,Uid:504d0597-cd75-4257-beda-cc4e8cbbf672,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\""
Jan 13 20:30:38.419406 kubelet[2618]: E0113 20:30:38.419372 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:39.296216 kubelet[2618]: E0113 20:30:39.296168 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923"
Jan 13 20:30:39.642397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258593140.mount: Deactivated successfully.
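[Annotation] The RunPodSandbox lines above are CRI calls arriving at containerd from the kubelet: the &PodSandboxMetadata{...} text is the request's metadata echoed back, and the returned sandbox id is what the cri-containerd-<id>.scope unit names are derived from. A hedged sketch of the same call issued directly against the CRI socket (socket path, credentials, and the minimal config are assumptions; the kubelet sends a far fuller PodSandboxConfig):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path for containerd's CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Matches the metadata echoed in the journal entries above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-typha-6f76f774c9-q7pdp",
				Uid:       "9873965f-8ae0-4029-ab74-2758b65617cb",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 0f844f62d373... in the log
}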
Jan 13 20:30:39.969960 containerd[1449]: time="2025-01-13T20:30:39.969819406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:30:39.970972 containerd[1449]: time="2025-01-13T20:30:39.970916360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 13 20:30:39.971898 containerd[1449]: time="2025-01-13T20:30:39.971867917Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:30:39.973702 containerd[1449]: time="2025-01-13T20:30:39.973657605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:30:39.974836 containerd[1449]: time="2025-01-13T20:30:39.974798570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.636795182s"
Jan 13 20:30:39.974892 containerd[1449]: time="2025-01-13T20:30:39.974875149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 13 20:30:39.977922 containerd[1449]: time="2025-01-13T20:30:39.977750987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:30:39.987177 containerd[1449]: time="2025-01-13T20:30:39.987039788Z" level=info msg="CreateContainer within sandbox \"0f844f62d3731ea64b94573ea76fdc660381c4312ab1e940916a923bd8c42b42\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:30:40.005481 containerd[1449]: time="2025-01-13T20:30:40.005431185Z" level=info msg="CreateContainer within sandbox \"0f844f62d3731ea64b94573ea76fdc660381c4312ab1e940916a923bd8c42b42\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"49431e98e45298ad83a7ed10719a1c8d35c435ce46686664c28b575c4d733715\""
Jan 13 20:30:40.006058 containerd[1449]: time="2025-01-13T20:30:40.006031169Z" level=info msg="StartContainer for \"49431e98e45298ad83a7ed10719a1c8d35c435ce46686664c28b575c4d733715\""
Jan 13 20:30:40.030723 systemd[1]: Started cri-containerd-49431e98e45298ad83a7ed10719a1c8d35c435ce46686664c28b575c4d733715.scope - libcontainer container 49431e98e45298ad83a7ed10719a1c8d35c435ce46686664c28b575c4d733715.
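[Annotation] CreateContainer and StartContainer above are the next two CRI calls in the same flow, issued once the typha image is available. A companion sketch to the sandbox example, with the sandbox and image ids taken from the log (same assumed socket; the real kubelet also sends the full ContainerConfig and SandboxConfig):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id from the RunPodSandbox reply earlier in the log.
	const sandboxID = "0f844f62d3731ea64b94573ea76fdc660381c4312ab1e940916a923bd8c42b42"

	create, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
		},
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: create.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started container:", create.ContainerId) // 49431e98e452... in the log
}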
Jan 13 20:30:40.065327 containerd[1449]: time="2025-01-13T20:30:40.065255477Z" level=info msg="StartContainer for \"49431e98e45298ad83a7ed10719a1c8d35c435ce46686664c28b575c4d733715\" returns successfully"
Jan 13 20:30:40.363442 kubelet[2618]: E0113 20:30:40.359575 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:40.376066 kubelet[2618]: I0113 20:30:40.376016 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6f76f774c9-q7pdp" podStartSLOduration=1.738222575 podStartE2EDuration="3.375978523s" podCreationTimestamp="2025-01-13 20:30:37 +0000 UTC" firstStartedPulling="2025-01-13 20:30:38.337420436 +0000 UTC m=+20.146326902" lastFinishedPulling="2025-01-13 20:30:39.975176384 +0000 UTC m=+21.784082850" observedRunningTime="2025-01-13 20:30:40.375009611 +0000 UTC m=+22.183916077" watchObservedRunningTime="2025-01-13 20:30:40.375978523 +0000 UTC m=+22.184884989"
Jan 13 20:30:40.419689 kubelet[2618]: E0113 20:30:40.419646 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:30:40.419689 kubelet[2618]: W0113 20:30:40.419676 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:30:40.419689 kubelet[2618]: E0113 20:30:40.419698 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three-line FlexVolume probe failure repeats from 20:30:40.419 until the captured log breaks off mid-entry at 20:30:40.427; the repeats are elided here.]
Error: unexpected end of JSON input" Jan 13 20:30:40.427338 kubelet[2618]: E0113 20:30:40.427314 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.427338 kubelet[2618]: W0113 20:30:40.427326 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.427497 kubelet[2618]: E0113 20:30:40.427451 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.427638 kubelet[2618]: E0113 20:30:40.427613 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.427638 kubelet[2618]: W0113 20:30:40.427625 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.427781 kubelet[2618]: E0113 20:30:40.427729 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.427995 kubelet[2618]: E0113 20:30:40.427981 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.428132 kubelet[2618]: W0113 20:30:40.428051 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.428132 kubelet[2618]: E0113 20:30:40.428083 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.428363 kubelet[2618]: E0113 20:30:40.428350 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.428420 kubelet[2618]: W0113 20:30:40.428410 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.428476 kubelet[2618]: E0113 20:30:40.428468 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.428721 kubelet[2618]: E0113 20:30:40.428708 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.428797 kubelet[2618]: W0113 20:30:40.428787 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.428874 kubelet[2618]: E0113 20:30:40.428863 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:30:40.429091 kubelet[2618]: E0113 20:30:40.429078 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.429364 kubelet[2618]: W0113 20:30:40.429145 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.429364 kubelet[2618]: E0113 20:30:40.429166 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.429520 kubelet[2618]: E0113 20:30:40.429508 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.429607 kubelet[2618]: W0113 20:30:40.429596 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.429673 kubelet[2618]: E0113 20:30:40.429665 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.429912 kubelet[2618]: E0113 20:30:40.429898 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.429990 kubelet[2618]: W0113 20:30:40.429977 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.430040 kubelet[2618]: E0113 20:30:40.430031 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.430272 kubelet[2618]: E0113 20:30:40.430260 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.430341 kubelet[2618]: W0113 20:30:40.430330 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.430391 kubelet[2618]: E0113 20:30:40.430383 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:30:40.430934 kubelet[2618]: E0113 20:30:40.430917 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:30:40.431062 kubelet[2618]: W0113 20:30:40.431012 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:30:40.431062 kubelet[2618]: E0113 20:30:40.431033 2618 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:30:41.054056 containerd[1449]: time="2025-01-13T20:30:41.053982483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:41.055699 containerd[1449]: time="2025-01-13T20:30:41.055645947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 13 20:30:41.056801 containerd[1449]: time="2025-01-13T20:30:41.056756724Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:41.058677 containerd[1449]: time="2025-01-13T20:30:41.058642000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:41.059601 containerd[1449]: time="2025-01-13T20:30:41.059435503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.081639265s" Jan 13 20:30:41.059601 containerd[1449]: time="2025-01-13T20:30:41.059473432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 20:30:41.064078 containerd[1449]: time="2025-01-13T20:30:41.062773075Z" level=info msg="CreateContainer within sandbox \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:30:41.081623 containerd[1449]: time="2025-01-13T20:30:41.081583225Z" level=info msg="CreateContainer within sandbox \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743\"" Jan 13 20:30:41.083296 containerd[1449]: time="2025-01-13T20:30:41.082077539Z" level=info msg="StartContainer for \"b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743\"" Jan 13 20:30:41.117723 systemd[1]: Started cri-containerd-b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743.scope - libcontainer container b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743. Jan 13 20:30:41.171099 systemd[1]: cri-containerd-b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743.scope: Deactivated successfully. Jan 13 20:30:41.184338 containerd[1449]: time="2025-01-13T20:30:41.184281533Z" level=info msg="StartContainer for \"b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743\" returns successfully" Jan 13 20:30:41.203767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743-rootfs.mount: Deactivated successfully. 
Jan 13 20:30:41.216203 containerd[1449]: time="2025-01-13T20:30:41.208975764Z" level=info msg="shim disconnected" id=b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743 namespace=k8s.io Jan 13 20:30:41.216203 containerd[1449]: time="2025-01-13T20:30:41.216200714Z" level=warning msg="cleaning up after shim disconnected" id=b7603eec64d6d309a125a8404e0eea05e076dd3fc4b64226b544bae13ade8743 namespace=k8s.io Jan 13 20:30:41.216203 containerd[1449]: time="2025-01-13T20:30:41.216217718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:30:41.295798 kubelet[2618]: E0113 20:30:41.295746 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:41.366419 kubelet[2618]: I0113 20:30:41.365916 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:30:41.366419 kubelet[2618]: E0113 20:30:41.366203 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:41.369003 containerd[1449]: time="2025-01-13T20:30:41.368959719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:30:41.369110 kubelet[2618]: E0113 20:30:41.369044 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:43.295794 kubelet[2618]: E0113 20:30:43.295746 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:45.296230 kubelet[2618]: E0113 20:30:45.296196 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:45.372875 kubelet[2618]: I0113 20:30:45.372668 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:30:45.373852 kubelet[2618]: E0113 20:30:45.373793 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:46.225959 containerd[1449]: time="2025-01-13T20:30:46.225907002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.227090 containerd[1449]: time="2025-01-13T20:30:46.227036021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 20:30:46.227633 containerd[1449]: time="2025-01-13T20:30:46.227601491Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.232077 containerd[1449]: 
time="2025-01-13T20:30:46.232015908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.232881 containerd[1449]: time="2025-01-13T20:30:46.232734167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.86373416s" Jan 13 20:30:46.232881 containerd[1449]: time="2025-01-13T20:30:46.232760732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 20:30:46.235479 containerd[1449]: time="2025-01-13T20:30:46.235452935Z" level=info msg="CreateContainer within sandbox \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:30:46.251475 containerd[1449]: time="2025-01-13T20:30:46.251429035Z" level=info msg="CreateContainer within sandbox \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be\"" Jan 13 20:30:46.251909 containerd[1449]: time="2025-01-13T20:30:46.251885443Z" level=info msg="StartContainer for \"e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be\"" Jan 13 20:30:46.286716 systemd[1]: Started cri-containerd-e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be.scope - libcontainer container e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be. Jan 13 20:30:46.338820 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:52690.service - OpenSSH per-connection server daemon (10.0.0.1:52690). Jan 13 20:30:46.358969 containerd[1449]: time="2025-01-13T20:30:46.358924173Z" level=info msg="StartContainer for \"e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be\" returns successfully" Jan 13 20:30:46.378269 kubelet[2618]: E0113 20:30:46.378231 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:46.384208 kubelet[2618]: E0113 20:30:46.378391 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:46.446749 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 52690 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:30:46.448010 sshd-session[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:46.451843 systemd-logind[1430]: New session 8 of user core. Jan 13 20:30:46.463754 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:30:46.590843 sshd[3370]: Connection closed by 10.0.0.1 port 52690 Jan 13 20:30:46.591101 sshd-session[3367]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:46.593911 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:52690.service: Deactivated successfully. Jan 13 20:30:46.595680 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 13 20:30:46.597643 systemd-logind[1430]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:30:46.599399 systemd-logind[1430]: Removed session 8. Jan 13 20:30:46.913041 systemd[1]: cri-containerd-e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be.scope: Deactivated successfully. Jan 13 20:30:46.935520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be-rootfs.mount: Deactivated successfully. Jan 13 20:30:46.938147 kubelet[2618]: I0113 20:30:46.938111 2618 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:30:46.967216 containerd[1449]: time="2025-01-13T20:30:46.966762998Z" level=info msg="shim disconnected" id=e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be namespace=k8s.io Jan 13 20:30:46.967216 containerd[1449]: time="2025-01-13T20:30:46.966873939Z" level=warning msg="cleaning up after shim disconnected" id=e6de1e32ac2caf5a72dcd61ee51974e2139cf683c3b67038c33c3df1c62034be namespace=k8s.io Jan 13 20:30:46.967216 containerd[1449]: time="2025-01-13T20:30:46.966883301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:30:46.988782 kubelet[2618]: I0113 20:30:46.988699 2618 topology_manager.go:215] "Topology Admit Handler" podUID="bfa3473d-43a3-447d-b0a7-c066cdd14301" podNamespace="kube-system" podName="coredns-76f75df574-rlqw7" Jan 13 20:30:46.991930 kubelet[2618]: I0113 20:30:46.991349 2618 topology_manager.go:215] "Topology Admit Handler" podUID="9a4b274c-0db7-4f24-b51e-a8ee914d4260" podNamespace="kube-system" podName="coredns-76f75df574-kr682" Jan 13 20:30:46.991930 kubelet[2618]: I0113 20:30:46.991492 2618 topology_manager.go:215] "Topology Admit Handler" podUID="1881e196-e398-402d-91c4-c538f30e9a68" podNamespace="calico-system" podName="calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:46.994167 kubelet[2618]: I0113 20:30:46.994128 2618 topology_manager.go:215] "Topology Admit Handler" podUID="5e2bb27f-e9a8-4574-9125-ac3ff1f5546b" podNamespace="calico-apiserver" podName="calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:46.994308 kubelet[2618]: I0113 20:30:46.994286 2618 topology_manager.go:215] "Topology Admit Handler" podUID="32e2fd28-1c60-4d81-883d-85b833d714fc" podNamespace="calico-apiserver" podName="calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:47.005614 systemd[1]: Created slice kubepods-burstable-podbfa3473d_43a3_447d_b0a7_c066cdd14301.slice - libcontainer container kubepods-burstable-podbfa3473d_43a3_447d_b0a7_c066cdd14301.slice. Jan 13 20:30:47.014473 systemd[1]: Created slice kubepods-burstable-pod9a4b274c_0db7_4f24_b51e_a8ee914d4260.slice - libcontainer container kubepods-burstable-pod9a4b274c_0db7_4f24_b51e_a8ee914d4260.slice. Jan 13 20:30:47.020804 systemd[1]: Created slice kubepods-besteffort-pod1881e196_e398_402d_91c4_c538f30e9a68.slice - libcontainer container kubepods-besteffort-pod1881e196_e398_402d_91c4_c538f30e9a68.slice. Jan 13 20:30:47.026935 systemd[1]: Created slice kubepods-besteffort-pod5e2bb27f_e9a8_4574_9125_ac3ff1f5546b.slice - libcontainer container kubepods-besteffort-pod5e2bb27f_e9a8_4574_9125_ac3ff1f5546b.slice. Jan 13 20:30:47.033317 systemd[1]: Created slice kubepods-besteffort-pod32e2fd28_1c60_4d81_883d_85b833d714fc.slice - libcontainer container kubepods-besteffort-pod32e2fd28_1c60_4d81_883d_85b833d714fc.slice. 
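The Created slice lines above follow kubelet's systemd cgroup naming scheme: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes rewritten to underscores because systemd treats "-" as a hierarchy separator in unit names. A small sketch of that mapping; podSliceName is our illustrative helper, not a kubelet function:

package main

import (
	"fmt"
	"strings"
)

// podSliceName maps a QoS class and pod UID to the slice names seen in the
// log. Guaranteed pods omit the QoS segment; only the two classes visible
// above are covered here.
func podSliceName(qosClass, podUID string) string {
	// Escape the UID's dashes so systemd does not read them as separators.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "bfa3473d-43a3-447d-b0a7-c066cdd14301"))
	// -> kubepods-burstable-podbfa3473d_43a3_447d_b0a7_c066cdd14301.slice
	fmt.Println(podSliceName("besteffort", "1881e196-e398-402d-91c4-c538f30e9a68"))
	// -> kubepods-besteffort-pod1881e196_e398_402d_91c4_c538f30e9a68.slice
}
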
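The recurring "Nameserver limits exceeded" warning from dns.go, seen several times in this section, means the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet applies only the first three; the "applied nameserver line" 1.1.1.1 1.0.0.1 8.8.8.8 is what survives. A simplified sketch of that truncation, assuming a plain resolv.conf parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the resolver limit implied by the three-address
// applied line in the log.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The condition behind the dns.go warning: extra entries are
		// dropped and only the first three are applied.
		fmt.Printf("Nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", servers)
}
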
Jan 13 20:30:47.078812 kubelet[2618]: I0113 20:30:47.078770 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a4b274c-0db7-4f24-b51e-a8ee914d4260-config-volume\") pod \"coredns-76f75df574-kr682\" (UID: \"9a4b274c-0db7-4f24-b51e-a8ee914d4260\") " pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:47.078812 kubelet[2618]: I0113 20:30:47.078825 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vf8p\" (UniqueName: \"kubernetes.io/projected/bfa3473d-43a3-447d-b0a7-c066cdd14301-kube-api-access-2vf8p\") pod \"coredns-76f75df574-rlqw7\" (UID: \"bfa3473d-43a3-447d-b0a7-c066cdd14301\") " pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:47.078993 kubelet[2618]: I0113 20:30:47.078849 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32e2fd28-1c60-4d81-883d-85b833d714fc-calico-apiserver-certs\") pod \"calico-apiserver-d98fcfdcc-xd4cg\" (UID: \"32e2fd28-1c60-4d81-883d-85b833d714fc\") " pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:47.078993 kubelet[2618]: I0113 20:30:47.078871 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsw57\" (UniqueName: \"kubernetes.io/projected/9a4b274c-0db7-4f24-b51e-a8ee914d4260-kube-api-access-jsw57\") pod \"coredns-76f75df574-kr682\" (UID: \"9a4b274c-0db7-4f24-b51e-a8ee914d4260\") " pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:47.078993 kubelet[2618]: I0113 20:30:47.078892 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa3473d-43a3-447d-b0a7-c066cdd14301-config-volume\") pod \"coredns-76f75df574-rlqw7\" (UID: \"bfa3473d-43a3-447d-b0a7-c066cdd14301\") " pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:47.078993 kubelet[2618]: I0113 20:30:47.078915 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6l8\" (UniqueName: \"kubernetes.io/projected/32e2fd28-1c60-4d81-883d-85b833d714fc-kube-api-access-6t6l8\") pod \"calico-apiserver-d98fcfdcc-xd4cg\" (UID: \"32e2fd28-1c60-4d81-883d-85b833d714fc\") " pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:47.078993 kubelet[2618]: I0113 20:30:47.078937 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1881e196-e398-402d-91c4-c538f30e9a68-tigera-ca-bundle\") pod \"calico-kube-controllers-75ff5498fd-l6pnm\" (UID: \"1881e196-e398-402d-91c4-c538f30e9a68\") " pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:47.079102 kubelet[2618]: I0113 20:30:47.078958 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chbw\" (UniqueName: \"kubernetes.io/projected/1881e196-e398-402d-91c4-c538f30e9a68-kube-api-access-9chbw\") pod \"calico-kube-controllers-75ff5498fd-l6pnm\" (UID: \"1881e196-e398-402d-91c4-c538f30e9a68\") " pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:47.079102 kubelet[2618]: I0113 20:30:47.078978 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rqntx\" (UniqueName: \"kubernetes.io/projected/5e2bb27f-e9a8-4574-9125-ac3ff1f5546b-kube-api-access-rqntx\") pod \"calico-apiserver-d98fcfdcc-j9cwm\" (UID: \"5e2bb27f-e9a8-4574-9125-ac3ff1f5546b\") " pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:47.079102 kubelet[2618]: I0113 20:30:47.079002 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e2bb27f-e9a8-4574-9125-ac3ff1f5546b-calico-apiserver-certs\") pod \"calico-apiserver-d98fcfdcc-j9cwm\" (UID: \"5e2bb27f-e9a8-4574-9125-ac3ff1f5546b\") " pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:47.301204 systemd[1]: Created slice kubepods-besteffort-pod727a9f8b_291c_4cff_81c1_972e6591d923.slice - libcontainer container kubepods-besteffort-pod727a9f8b_291c_4cff_81c1_972e6591d923.slice. Jan 13 20:30:47.303303 containerd[1449]: time="2025-01-13T20:30:47.303266404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:0,}" Jan 13 20:30:47.311899 kubelet[2618]: E0113 20:30:47.311864 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:47.312727 containerd[1449]: time="2025-01-13T20:30:47.312686174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:47.319080 kubelet[2618]: E0113 20:30:47.319040 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:47.319971 containerd[1449]: time="2025-01-13T20:30:47.319733699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:47.327156 containerd[1449]: time="2025-01-13T20:30:47.327116366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:0,}" Jan 13 20:30:47.333695 containerd[1449]: time="2025-01-13T20:30:47.333235116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:30:47.344046 containerd[1449]: time="2025-01-13T20:30:47.344006580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:30:47.383207 kubelet[2618]: E0113 20:30:47.380263 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:47.383829 containerd[1449]: time="2025-01-13T20:30:47.382381712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:30:47.722173 containerd[1449]: time="2025-01-13T20:30:47.722128802Z" level=error msg="Failed to destroy network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.724559 containerd[1449]: time="2025-01-13T20:30:47.722212378Z" level=error msg="Failed to destroy network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.724759 containerd[1449]: time="2025-01-13T20:30:47.722265428Z" level=error msg="Failed to destroy network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.725510 containerd[1449]: time="2025-01-13T20:30:47.725345007Z" level=error msg="encountered an error cleaning up failed sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.725709 containerd[1449]: time="2025-01-13T20:30:47.725684311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.726168 containerd[1449]: time="2025-01-13T20:30:47.725881108Z" level=error msg="encountered an error cleaning up failed sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.726231 containerd[1449]: time="2025-01-13T20:30:47.726196207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.726599 containerd[1449]: time="2025-01-13T20:30:47.726562836Z" level=error msg="encountered an error cleaning up failed sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.727222 containerd[1449]: time="2025-01-13T20:30:47.727181152Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.728813 kubelet[2618]: E0113 20:30:47.728778 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.728928 kubelet[2618]: E0113 20:30:47.728850 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:47.728928 kubelet[2618]: E0113 20:30:47.728872 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:47.729004 kubelet[2618]: E0113 20:30:47.728928 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" podUID="1881e196-e398-402d-91c4-c538f30e9a68" Jan 13 20:30:47.729152 kubelet[2618]: E0113 20:30:47.729127 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.729201 kubelet[2618]: E0113 20:30:47.729168 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:47.729201 kubelet[2618]: E0113 20:30:47.729184 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:47.729271 kubelet[2618]: E0113 20:30:47.729221 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" podUID="5e2bb27f-e9a8-4574-9125-ac3ff1f5546b" Jan 13 20:30:47.730120 kubelet[2618]: E0113 20:30:47.729596 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.730120 kubelet[2618]: E0113 20:30:47.729654 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:47.730120 kubelet[2618]: E0113 20:30:47.729675 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:47.730244 kubelet[2618]: E0113 20:30:47.729730 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" podUID="32e2fd28-1c60-4d81-883d-85b833d714fc" Jan 13 20:30:47.732726 containerd[1449]: time="2025-01-13T20:30:47.732644058Z" level=error msg="Failed to destroy network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.733301 containerd[1449]: time="2025-01-13T20:30:47.733172238Z" level=error msg="encountered an error cleaning up failed sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.733301 containerd[1449]: time="2025-01-13T20:30:47.733277297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.733704 kubelet[2618]: E0113 20:30:47.733682 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.733751 kubelet[2618]: E0113 20:30:47.733724 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:47.733751 kubelet[2618]: E0113 20:30:47.733743 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:47.733799 kubelet[2618]: E0113 20:30:47.733788 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kr682" podUID="9a4b274c-0db7-4f24-b51e-a8ee914d4260" Jan 13 20:30:47.737297 containerd[1449]: time="2025-01-13T20:30:47.737256405Z" level=error msg="Failed to destroy network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.738272 containerd[1449]: time="2025-01-13T20:30:47.738192421Z" level=error msg="encountered an error cleaning up failed sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.738272 containerd[1449]: time="2025-01-13T20:30:47.738260994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.738551 kubelet[2618]: E0113 20:30:47.738517 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.739628 kubelet[2618]: E0113 20:30:47.739602 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:47.739679 kubelet[2618]: E0113 20:30:47.739648 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:47.739721 kubelet[2618]: E0113 20:30:47.739708 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rlqw7" podUID="bfa3473d-43a3-447d-b0a7-c066cdd14301" Jan 13 20:30:47.739940 containerd[1449]: time="2025-01-13T20:30:47.739891981Z" level=error msg="Failed to destroy network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.740410 containerd[1449]: time="2025-01-13T20:30:47.740214801Z" level=error msg="encountered an error cleaning up failed sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.740410 containerd[1449]: time="2025-01-13T20:30:47.740277693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.740503 kubelet[2618]: E0113 20:30:47.740466 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:47.740503 kubelet[2618]: E0113 20:30:47.740500 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:47.740566 kubelet[2618]: E0113 20:30:47.740517 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:47.740598 kubelet[2618]: E0113 20:30:47.740564 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:48.243996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358-shm.mount: Deactivated successfully. Jan 13 20:30:48.244092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb-shm.mount: Deactivated successfully. Jan 13 20:30:48.244144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7-shm.mount: Deactivated successfully. Jan 13 20:30:48.382410 kubelet[2618]: I0113 20:30:48.382367 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7" Jan 13 20:30:48.382995 containerd[1449]: time="2025-01-13T20:30:48.382960412Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:30:48.384733 kubelet[2618]: I0113 20:30:48.384396 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358" Jan 13 20:30:48.385039 containerd[1449]: time="2025-01-13T20:30:48.385007785Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\"" Jan 13 20:30:48.385210 containerd[1449]: time="2025-01-13T20:30:48.385192499Z" level=info msg="Ensure that sandbox e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358 in task-service has been cleanup successfully" Jan 13 20:30:48.385706 containerd[1449]: time="2025-01-13T20:30:48.385672306Z" level=info msg="Ensure that sandbox 43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7 in task-service has been cleanup successfully" Jan 13 20:30:48.386347 containerd[1449]: time="2025-01-13T20:30:48.386111346Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:30:48.386347 containerd[1449]: time="2025-01-13T20:30:48.386130790Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:30:48.389089 containerd[1449]: time="2025-01-13T20:30:48.387259035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:1,}" Jan 13 20:30:48.388471 systemd[1]: run-netns-cni\x2d1cdd217b\x2d5a08\x2d9e08\x2d485c\x2daeac506391b9.mount: Deactivated successfully. Jan 13 20:30:48.388571 systemd[1]: run-netns-cni\x2deaf12058\x2d7956\x2dad27\x2deb47\x2d723431d7ba08.mount: Deactivated successfully. 
Jan 13 20:30:48.390146 containerd[1449]: time="2025-01-13T20:30:48.389681117Z" level=info msg="TearDown network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully" Jan 13 20:30:48.390146 containerd[1449]: time="2025-01-13T20:30:48.389700640Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully" Jan 13 20:30:48.390228 kubelet[2618]: I0113 20:30:48.389840 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae" Jan 13 20:30:48.390469 kubelet[2618]: E0113 20:30:48.390440 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:48.391661 containerd[1449]: time="2025-01-13T20:30:48.391341099Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\"" Jan 13 20:30:48.391661 containerd[1449]: time="2025-01-13T20:30:48.391516051Z" level=info msg="Ensure that sandbox d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae in task-service has been cleanup successfully" Jan 13 20:30:48.391900 containerd[1449]: time="2025-01-13T20:30:48.391878397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:1,}" Jan 13 20:30:48.392556 kubelet[2618]: I0113 20:30:48.392491 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030" Jan 13 20:30:48.393095 containerd[1449]: time="2025-01-13T20:30:48.393066494Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully" Jan 13 20:30:48.393095 containerd[1449]: time="2025-01-13T20:30:48.393090698Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully" Jan 13 20:30:48.393176 containerd[1449]: time="2025-01-13T20:30:48.393145228Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\"" Jan 13 20:30:48.393451 containerd[1449]: time="2025-01-13T20:30:48.393284973Z" level=info msg="Ensure that sandbox 9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030 in task-service has been cleanup successfully" Jan 13 20:30:48.394926 containerd[1449]: time="2025-01-13T20:30:48.394196299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:30:48.395226 containerd[1449]: time="2025-01-13T20:30:48.395066978Z" level=info msg="TearDown network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully" Jan 13 20:30:48.395226 containerd[1449]: time="2025-01-13T20:30:48.395098504Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully" Jan 13 20:30:48.395977 systemd[1]: run-netns-cni\x2d5057eeb8\x2d040c\x2d0838\x2daf1c\x2d584af1acaf14.mount: Deactivated successfully. Jan 13 20:30:48.396101 systemd[1]: run-netns-cni\x2d78dd7d00\x2dd1ad\x2d7dd3\x2dd5f3\x2df0c40175a709.mount: Deactivated successfully. 
Jan 13 20:30:48.397279 containerd[1449]: time="2025-01-13T20:30:48.396942400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:30:48.397440 kubelet[2618]: I0113 20:30:48.397335 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb" Jan 13 20:30:48.398276 containerd[1449]: time="2025-01-13T20:30:48.397998152Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\"" Jan 13 20:30:48.398276 containerd[1449]: time="2025-01-13T20:30:48.398140978Z" level=info msg="Ensure that sandbox 6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb in task-service has been cleanup successfully" Jan 13 20:30:48.398523 containerd[1449]: time="2025-01-13T20:30:48.398500964Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully" Jan 13 20:30:48.398651 containerd[1449]: time="2025-01-13T20:30:48.398629227Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully" Jan 13 20:30:48.398858 kubelet[2618]: E0113 20:30:48.398841 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:48.399305 containerd[1449]: time="2025-01-13T20:30:48.399093632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:1,}" Jan 13 20:30:48.399969 kubelet[2618]: I0113 20:30:48.399946 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63" Jan 13 20:30:48.400537 containerd[1449]: time="2025-01-13T20:30:48.400515331Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\"" Jan 13 20:30:48.401125 systemd[1]: run-netns-cni\x2dc3079a27\x2da15f\x2db925\x2d3932\x2de70b008e1580.mount: Deactivated successfully. 
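Every failure in this excerpt shares one root cause, spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename before performing any add or delete, and the file is absent because the calico/node container is not yet running (or has not mounted /var/lib/calico/). A minimal sketch of that precondition in Go — illustrative only, not Calico's actual source; the path and message are taken verbatim from the log lines above:

```go
package main

import (
	"fmt"
	"os"
)

// Path taken from the repeated error messages above; calico/node writes
// this file once it is up and has /var/lib/calico/ mounted.
const nodenameFile = "/var/lib/calico/nodename"

// ensureCalicoNodeReady mirrors the guard the CNI plugin appears to apply:
// refuse any sandbox add/delete until the nodename file exists.
func ensureCalicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := ensureCalicoNodeReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node appears ready")
}
```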
Jan 13 20:30:48.401873 containerd[1449]: time="2025-01-13T20:30:48.401845773Z" level=info msg="Ensure that sandbox 01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63 in task-service has been cleanup successfully" Jan 13 20:30:48.402182 containerd[1449]: time="2025-01-13T20:30:48.402162511Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully" Jan 13 20:30:48.402272 containerd[1449]: time="2025-01-13T20:30:48.402258328Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully" Jan 13 20:30:48.403703 containerd[1449]: time="2025-01-13T20:30:48.403618496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:1,}" Jan 13 20:30:48.483519 containerd[1449]: time="2025-01-13T20:30:48.483463565Z" level=error msg="Failed to destroy network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.483804 containerd[1449]: time="2025-01-13T20:30:48.483780703Z" level=error msg="encountered an error cleaning up failed sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.483863 containerd[1449]: time="2025-01-13T20:30:48.483845274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.484056 kubelet[2618]: E0113 20:30:48.484037 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.484112 kubelet[2618]: E0113 20:30:48.484100 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:48.484140 kubelet[2618]: E0113 20:30:48.484120 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:48.484187 kubelet[2618]: E0113 20:30:48.484174 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:48.590609 containerd[1449]: time="2025-01-13T20:30:48.590398769Z" level=error msg="Failed to destroy network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.591633 containerd[1449]: time="2025-01-13T20:30:48.591445920Z" level=error msg="encountered an error cleaning up failed sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.592428 containerd[1449]: time="2025-01-13T20:30:48.591524215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.592739 kubelet[2618]: E0113 20:30:48.592706 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.592799 kubelet[2618]: E0113 20:30:48.592761 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:48.592799 kubelet[2618]: E0113 20:30:48.592791 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:48.592868 kubelet[2618]: E0113 20:30:48.592840 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" podUID="32e2fd28-1c60-4d81-883d-85b833d714fc" Jan 13 20:30:48.613342 containerd[1449]: time="2025-01-13T20:30:48.613294581Z" level=error msg="Failed to destroy network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.613813 containerd[1449]: time="2025-01-13T20:30:48.613782470Z" level=error msg="encountered an error cleaning up failed sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.613884 containerd[1449]: time="2025-01-13T20:30:48.613847442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.614774 kubelet[2618]: E0113 20:30:48.614751 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.614931 kubelet[2618]: E0113 20:30:48.614919 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:48.615220 kubelet[2618]: E0113 
20:30:48.614997 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:48.615220 kubelet[2618]: E0113 20:30:48.615058 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" podUID="1881e196-e398-402d-91c4-c538f30e9a68" Jan 13 20:30:48.623656 containerd[1449]: time="2025-01-13T20:30:48.623599099Z" level=error msg="Failed to destroy network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.623782 containerd[1449]: time="2025-01-13T20:30:48.623727122Z" level=error msg="Failed to destroy network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.624030 containerd[1449]: time="2025-01-13T20:30:48.623978128Z" level=error msg="encountered an error cleaning up failed sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.624068 containerd[1449]: time="2025-01-13T20:30:48.624047421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.624738 containerd[1449]: time="2025-01-13T20:30:48.624131716Z" level=error msg="encountered an error cleaning up failed sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:30:48.624738 containerd[1449]: time="2025-01-13T20:30:48.624187526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.624866 kubelet[2618]: E0113 20:30:48.624420 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.624866 kubelet[2618]: E0113 20:30:48.624467 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:48.624866 kubelet[2618]: E0113 20:30:48.624486 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:48.624866 kubelet[2618]: E0113 20:30:48.624588 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.624970 kubelet[2618]: E0113 20:30:48.624611 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:48.624970 kubelet[2618]: E0113 20:30:48.624627 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:48.624970 kubelet[2618]: E0113 20:30:48.624672 2618 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" podUID="5e2bb27f-e9a8-4574-9125-ac3ff1f5546b" Jan 13 20:30:48.625049 kubelet[2618]: E0113 20:30:48.624706 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rlqw7" podUID="bfa3473d-43a3-447d-b0a7-c066cdd14301" Jan 13 20:30:48.629575 containerd[1449]: time="2025-01-13T20:30:48.629480090Z" level=error msg="Failed to destroy network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.631593 containerd[1449]: time="2025-01-13T20:30:48.630473271Z" level=error msg="encountered an error cleaning up failed sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.631593 containerd[1449]: time="2025-01-13T20:30:48.630563568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.631970 kubelet[2618]: E0113 20:30:48.631797 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:48.631970 kubelet[2618]: E0113 20:30:48.631847 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:48.631970 kubelet[2618]: E0113 20:30:48.631881 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:48.632085 kubelet[2618]: E0113 20:30:48.631934 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kr682" podUID="9a4b274c-0db7-4f24-b51e-a8ee914d4260" Jan 13 20:30:49.246851 systemd[1]: run-netns-cni\x2d4743d7a2\x2d6185\x2d3571\x2dc052\x2d238f2232259d.mount: Deactivated successfully. Jan 13 20:30:49.404631 kubelet[2618]: I0113 20:30:49.404362 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9" Jan 13 20:30:49.405212 containerd[1449]: time="2025-01-13T20:30:49.405176584Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\"" Jan 13 20:30:49.405387 containerd[1449]: time="2025-01-13T20:30:49.405347534Z" level=info msg="Ensure that sandbox 822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9 in task-service has been cleanup successfully" Jan 13 20:30:49.407862 containerd[1449]: time="2025-01-13T20:30:49.407824893Z" level=info msg="TearDown network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" successfully" Jan 13 20:30:49.407862 containerd[1449]: time="2025-01-13T20:30:49.407849617Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" returns successfully" Jan 13 20:30:49.408325 kubelet[2618]: I0113 20:30:49.408282 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672" Jan 13 20:30:49.409599 containerd[1449]: time="2025-01-13T20:30:49.408796984Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\"" Jan 13 20:30:49.409599 containerd[1449]: time="2025-01-13T20:30:49.408967935Z" level=info msg="Ensure that sandbox 597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672 in task-service has been cleanup successfully" Jan 13 20:30:49.409599 containerd[1449]: time="2025-01-13T20:30:49.409073833Z" level=info msg="StopPodSandbox for 
\"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\"" Jan 13 20:30:49.409599 containerd[1449]: time="2025-01-13T20:30:49.409158768Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully" Jan 13 20:30:49.409599 containerd[1449]: time="2025-01-13T20:30:49.409168610Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully" Jan 13 20:30:49.409004 systemd[1]: run-netns-cni\x2da39e8559\x2d7414\x2d6517\x2d7bf0\x2d9d780917cddf.mount: Deactivated successfully. Jan 13 20:30:49.410086 containerd[1449]: time="2025-01-13T20:30:49.410054607Z" level=info msg="TearDown network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" successfully" Jan 13 20:30:49.410086 containerd[1449]: time="2025-01-13T20:30:49.410077451Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" returns successfully" Jan 13 20:30:49.410292 containerd[1449]: time="2025-01-13T20:30:49.410267004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:30:49.412246 containerd[1449]: time="2025-01-13T20:30:49.412217829Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\"" Jan 13 20:30:49.412384 containerd[1449]: time="2025-01-13T20:30:49.412314326Z" level=info msg="TearDown network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully" Jan 13 20:30:49.412384 containerd[1449]: time="2025-01-13T20:30:49.412325208Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully" Jan 13 20:30:49.412393 systemd[1]: run-netns-cni\x2d7560b87f\x2d2e60\x2d74c2\x2dd81b\x2daddda10bf1d7.mount: Deactivated successfully. 
Jan 13 20:30:49.412650 kubelet[2618]: E0113 20:30:49.412526 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:49.412959 kubelet[2618]: I0113 20:30:49.412747 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0" Jan 13 20:30:49.415209 containerd[1449]: time="2025-01-13T20:30:49.415018005Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\"" Jan 13 20:30:49.415402 containerd[1449]: time="2025-01-13T20:30:49.415150708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:2,}" Jan 13 20:30:49.415473 containerd[1449]: time="2025-01-13T20:30:49.415434278Z" level=info msg="Ensure that sandbox d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0 in task-service has been cleanup successfully" Jan 13 20:30:49.417278 containerd[1449]: time="2025-01-13T20:30:49.416855570Z" level=info msg="TearDown network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" successfully" Jan 13 20:30:49.417278 containerd[1449]: time="2025-01-13T20:30:49.416884735Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" returns successfully" Jan 13 20:30:49.417584 containerd[1449]: time="2025-01-13T20:30:49.417381143Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\"" Jan 13 20:30:49.417584 containerd[1449]: time="2025-01-13T20:30:49.417472359Z" level=info msg="TearDown network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully" Jan 13 20:30:49.417584 containerd[1449]: time="2025-01-13T20:30:49.417483161Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully" Jan 13 20:30:49.417853 systemd[1]: run-netns-cni\x2d45284048\x2d9d91\x2d23d1\x2d05db\x2d6bc205fe6d1d.mount: Deactivated successfully. 
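Note the Attempt counter in each RunPodSandbox metadata line climbing from 0 to 1 to 2 across these entries: every failed CreatePodSandbox surfaces as a CreatePodSandboxError, the half-built sandbox is torn down (the StopPodSandbox/TearDown pairs above), and the next pod sync retries with the counter incremented. A hypothetical sketch of that cycle — not kubelet's real code; runPodSandbox and stopPodSandbox merely stand in for the CRI calls seen in remote_runtime.go, and the cap of three reflects only the attempts visible in this excerpt:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in error matching the failure repeated throughout this log.
var errNodename = errors.New("stat /var/lib/calico/nodename: no such file or directory")

// Hypothetical stand-ins for the CRI RunPodSandbox / StopPodSandbox calls.
func runPodSandbox(attempt uint32) (string, error) { return "", errNodename }
func stopPodSandbox(id string) error               { return nil }

func main() {
	var attempt uint32 // the Attempt field in PodSandboxMetadata
	for attempt < 3 {  // kubelet actually retries indefinitely with backoff
		id, err := runPodSandbox(attempt)
		if err != nil {
			// Matches the log: the failed sandbox is cleaned up
			// ("StopPodSandbox ... returns successfully") and the
			// next pod sync retries with Attempt incremented.
			fmt.Printf("attempt %d failed: %v\n", attempt, err)
			_ = stopPodSandbox(id)
			attempt++
			continue
		}
		fmt.Println("sandbox ready:", id)
		return
	}
}
```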
Jan 13 20:30:49.418865 containerd[1449]: time="2025-01-13T20:30:49.418340632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:30:49.419700 kubelet[2618]: I0113 20:30:49.418616 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21" Jan 13 20:30:49.423205 containerd[1449]: time="2025-01-13T20:30:49.422761294Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\"" Jan 13 20:30:49.423205 containerd[1449]: time="2025-01-13T20:30:49.422924283Z" level=info msg="Ensure that sandbox 4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21 in task-service has been cleanup successfully" Jan 13 20:30:49.423330 containerd[1449]: time="2025-01-13T20:30:49.423212014Z" level=info msg="TearDown network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" successfully" Jan 13 20:30:49.423330 containerd[1449]: time="2025-01-13T20:30:49.423227736Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" returns successfully" Jan 13 20:30:49.425158 systemd[1]: run-netns-cni\x2daa4c4597\x2dcd08\x2d0fb6\x2d9e3e\x2d4fce7014c973.mount: Deactivated successfully. Jan 13 20:30:49.429369 containerd[1449]: time="2025-01-13T20:30:49.429149584Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\"" Jan 13 20:30:49.429369 containerd[1449]: time="2025-01-13T20:30:49.429308892Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully" Jan 13 20:30:49.429369 containerd[1449]: time="2025-01-13T20:30:49.429322134Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully" Jan 13 20:30:49.431563 kubelet[2618]: E0113 20:30:49.429770 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:49.431672 containerd[1449]: time="2025-01-13T20:30:49.430130037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:2,}" Jan 13 20:30:49.435582 kubelet[2618]: I0113 20:30:49.431935 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300" Jan 13 20:30:49.436370 containerd[1449]: time="2025-01-13T20:30:49.436341976Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\"" Jan 13 20:30:49.438385 containerd[1449]: time="2025-01-13T20:30:49.438352811Z" level=info msg="Ensure that sandbox 2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300 in task-service has been cleanup successfully" Jan 13 20:30:49.440298 kubelet[2618]: I0113 20:30:49.439689 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8" Jan 13 20:30:49.442769 containerd[1449]: time="2025-01-13T20:30:49.442728585Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:30:49.442958 
containerd[1449]: time="2025-01-13T20:30:49.442935662Z" level=info msg="Ensure that sandbox 3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8 in task-service has been cleanup successfully" Jan 13 20:30:49.443624 containerd[1449]: time="2025-01-13T20:30:49.443593738Z" level=info msg="TearDown network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" successfully" Jan 13 20:30:49.443624 containerd[1449]: time="2025-01-13T20:30:49.443619943Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" returns successfully" Jan 13 20:30:49.443738 containerd[1449]: time="2025-01-13T20:30:49.443611261Z" level=info msg="TearDown network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" successfully" Jan 13 20:30:49.443738 containerd[1449]: time="2025-01-13T20:30:49.443683994Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" returns successfully" Jan 13 20:30:49.444078 containerd[1449]: time="2025-01-13T20:30:49.444050139Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\"" Jan 13 20:30:49.444190 containerd[1449]: time="2025-01-13T20:30:49.444128113Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully" Jan 13 20:30:49.444190 containerd[1449]: time="2025-01-13T20:30:49.444142355Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully" Jan 13 20:30:49.444243 containerd[1449]: time="2025-01-13T20:30:49.444192724Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:30:49.447057 containerd[1449]: time="2025-01-13T20:30:49.446743855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:2,}" Jan 13 20:30:49.447057 containerd[1449]: time="2025-01-13T20:30:49.446997140Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:30:49.447057 containerd[1449]: time="2025-01-13T20:30:49.447011422Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:30:49.450048 containerd[1449]: time="2025-01-13T20:30:49.448295129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:2,}" Jan 13 20:30:49.557796 containerd[1449]: time="2025-01-13T20:30:49.557649068Z" level=error msg="Failed to destroy network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.561199 containerd[1449]: time="2025-01-13T20:30:49.561153368Z" level=error msg="encountered an error cleaning up failed sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
13 20:30:49.561392 containerd[1449]: time="2025-01-13T20:30:49.561369806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.561964 kubelet[2618]: E0113 20:30:49.561825 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.562071 kubelet[2618]: E0113 20:30:49.562000 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:49.562071 kubelet[2618]: E0113 20:30:49.562025 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:49.562129 kubelet[2618]: E0113 20:30:49.562079 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" podUID="32e2fd28-1c60-4d81-883d-85b833d714fc" Jan 13 20:30:49.586330 containerd[1449]: time="2025-01-13T20:30:49.586231362Z" level=error msg="Failed to destroy network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.586598 containerd[1449]: time="2025-01-13T20:30:49.586571943Z" level=error msg="encountered an error cleaning up failed sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.586650 containerd[1449]: time="2025-01-13T20:30:49.586631873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.586946 kubelet[2618]: E0113 20:30:49.586922 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.587006 kubelet[2618]: E0113 20:30:49.586982 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:49.587006 kubelet[2618]: E0113 20:30:49.587005 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:49.587072 kubelet[2618]: E0113 20:30:49.587060 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rlqw7" podUID="bfa3473d-43a3-447d-b0a7-c066cdd14301" Jan 13 20:30:49.610384 containerd[1449]: time="2025-01-13T20:30:49.610320262Z" level=error msg="Failed to destroy network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.610741 containerd[1449]: time="2025-01-13T20:30:49.610701690Z" level=error msg="encountered an error cleaning up failed sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.610833 containerd[1449]: time="2025-01-13T20:30:49.610801548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.611168 kubelet[2618]: E0113 20:30:49.611136 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.611224 kubelet[2618]: E0113 20:30:49.611205 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:49.611250 kubelet[2618]: E0113 20:30:49.611226 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:49.611299 kubelet[2618]: E0113 20:30:49.611286 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" podUID="5e2bb27f-e9a8-4574-9125-ac3ff1f5546b" Jan 13 20:30:49.632694 containerd[1449]: time="2025-01-13T20:30:49.632641290Z" level=error msg="Failed to destroy network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.633674 containerd[1449]: time="2025-01-13T20:30:49.633502802Z" level=error 
msg="encountered an error cleaning up failed sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.633674 containerd[1449]: time="2025-01-13T20:30:49.633579736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.633978 kubelet[2618]: E0113 20:30:49.633946 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.634041 kubelet[2618]: E0113 20:30:49.634009 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:49.634041 kubelet[2618]: E0113 20:30:49.634030 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:49.634089 kubelet[2618]: E0113 20:30:49.634081 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kr682" podUID="9a4b274c-0db7-4f24-b51e-a8ee914d4260" Jan 13 20:30:49.665802 containerd[1449]: time="2025-01-13T20:30:49.665740983Z" level=error msg="Failed to destroy network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.666638 
containerd[1449]: time="2025-01-13T20:30:49.666150816Z" level=error msg="encountered an error cleaning up failed sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.666638 containerd[1449]: time="2025-01-13T20:30:49.666209746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.666753 kubelet[2618]: E0113 20:30:49.666439 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.666753 kubelet[2618]: E0113 20:30:49.666494 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:49.666753 kubelet[2618]: E0113 20:30:49.666513 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:49.666847 kubelet[2618]: E0113 20:30:49.666709 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" podUID="1881e196-e398-402d-91c4-c538f30e9a68" Jan 13 20:30:49.672029 containerd[1449]: time="2025-01-13T20:30:49.671976966Z" level=error msg="Failed to destroy network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.672596 containerd[1449]: time="2025-01-13T20:30:49.672557989Z" level=error msg="encountered an error cleaning up failed sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.672782 containerd[1449]: time="2025-01-13T20:30:49.672751303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.672992 kubelet[2618]: E0113 20:30:49.672962 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:49.673035 kubelet[2618]: E0113 20:30:49.673013 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:49.673035 kubelet[2618]: E0113 20:30:49.673033 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:49.673091 kubelet[2618]: E0113 20:30:49.673080 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:50.245925 systemd[1]: run-netns-cni\x2d65ffa61d\x2d2852\x2d7300\x2dbc88\x2d33e843bb15d2.mount: Deactivated successfully. 
Jan 13 20:30:50.246052 systemd[1]: run-netns-cni\x2dd50e77e2\x2d6efa\x2d81dc\x2d3b8e\x2dd6134f1a52df.mount: Deactivated successfully. Jan 13 20:30:50.444604 kubelet[2618]: I0113 20:30:50.443917 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d" Jan 13 20:30:50.446237 containerd[1449]: time="2025-01-13T20:30:50.446187438Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\"" Jan 13 20:30:50.446999 containerd[1449]: time="2025-01-13T20:30:50.446967932Z" level=info msg="Ensure that sandbox 7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d in task-service has been cleanup successfully" Jan 13 20:30:50.447534 containerd[1449]: time="2025-01-13T20:30:50.447433252Z" level=info msg="TearDown network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" successfully" Jan 13 20:30:50.447534 containerd[1449]: time="2025-01-13T20:30:50.447453935Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" returns successfully" Jan 13 20:30:50.447827 containerd[1449]: time="2025-01-13T20:30:50.447792993Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\"" Jan 13 20:30:50.447908 containerd[1449]: time="2025-01-13T20:30:50.447891930Z" level=info msg="TearDown network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" successfully" Jan 13 20:30:50.447908 containerd[1449]: time="2025-01-13T20:30:50.447906933Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" returns successfully" Jan 13 20:30:50.448368 containerd[1449]: time="2025-01-13T20:30:50.448141933Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\"" Jan 13 20:30:50.448368 containerd[1449]: time="2025-01-13T20:30:50.448221507Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully" Jan 13 20:30:50.448368 containerd[1449]: time="2025-01-13T20:30:50.448231349Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully" Jan 13 20:30:50.449195 systemd[1]: run-netns-cni\x2d04ba0be0\x2dc057\x2d2faa\x2d6f95\x2d0f0197779f2b.mount: Deactivated successfully. 
Jan 13 20:30:50.450128 kubelet[2618]: I0113 20:30:50.450100 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423" Jan 13 20:30:50.450988 containerd[1449]: time="2025-01-13T20:30:50.450758903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:3,}" Jan 13 20:30:50.451282 containerd[1449]: time="2025-01-13T20:30:50.451260629Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" Jan 13 20:30:50.451598 containerd[1449]: time="2025-01-13T20:30:50.451576083Z" level=info msg="Ensure that sandbox 02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423 in task-service has been cleanup successfully" Jan 13 20:30:50.451867 containerd[1449]: time="2025-01-13T20:30:50.451848930Z" level=info msg="TearDown network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" successfully" Jan 13 20:30:50.451939 containerd[1449]: time="2025-01-13T20:30:50.451925984Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" returns successfully" Jan 13 20:30:50.454317 containerd[1449]: time="2025-01-13T20:30:50.454293830Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:30:50.454572 containerd[1449]: time="2025-01-13T20:30:50.454518349Z" level=info msg="TearDown network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" successfully" Jan 13 20:30:50.454790 kubelet[2618]: I0113 20:30:50.454768 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392" Jan 13 20:30:50.455274 containerd[1449]: time="2025-01-13T20:30:50.454898134Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" returns successfully" Jan 13 20:30:50.455520 containerd[1449]: time="2025-01-13T20:30:50.455426225Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\"" Jan 13 20:30:50.456237 containerd[1449]: time="2025-01-13T20:30:50.456017647Z" level=info msg="Ensure that sandbox 12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392 in task-service has been cleanup successfully" Jan 13 20:30:50.457914 containerd[1449]: time="2025-01-13T20:30:50.456884836Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:30:50.457914 containerd[1449]: time="2025-01-13T20:30:50.457783830Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:30:50.457914 containerd[1449]: time="2025-01-13T20:30:50.457799513Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:30:50.458275 containerd[1449]: time="2025-01-13T20:30:50.456918521Z" level=info msg="TearDown network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" successfully" Jan 13 20:30:50.458275 containerd[1449]: time="2025-01-13T20:30:50.458157814Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" returns successfully" Jan 13 20:30:50.458305 
systemd[1]: run-netns-cni\x2d7f18b193\x2d3d5a\x2db31e\x2d81de\x2dd77073f88749.mount: Deactivated successfully. Jan 13 20:30:50.459875 containerd[1449]: time="2025-01-13T20:30:50.459645230Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\"" Jan 13 20:30:50.459875 containerd[1449]: time="2025-01-13T20:30:50.459731845Z" level=info msg="TearDown network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" successfully" Jan 13 20:30:50.459875 containerd[1449]: time="2025-01-13T20:30:50.459741886Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" returns successfully" Jan 13 20:30:50.459875 containerd[1449]: time="2025-01-13T20:30:50.459771332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:3,}" Jan 13 20:30:50.461102 containerd[1449]: time="2025-01-13T20:30:50.460928290Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\"" Jan 13 20:30:50.461102 containerd[1449]: time="2025-01-13T20:30:50.461019466Z" level=info msg="TearDown network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully" Jan 13 20:30:50.461102 containerd[1449]: time="2025-01-13T20:30:50.461030468Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully" Jan 13 20:30:50.461666 kubelet[2618]: E0113 20:30:50.461295 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:50.461666 kubelet[2618]: I0113 20:30:50.461315 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b" Jan 13 20:30:50.461872 containerd[1449]: time="2025-01-13T20:30:50.461836926Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\"" Jan 13 20:30:50.462052 containerd[1449]: time="2025-01-13T20:30:50.462030360Z" level=info msg="Ensure that sandbox 8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b in task-service has been cleanup successfully" Jan 13 20:30:50.462251 systemd[1]: run-netns-cni\x2d61b1e5b8\x2daf8e\x2d1eae\x2d4f32\x2dd1d4ea8b3dce.mount: Deactivated successfully. 
Jan 13 20:30:50.464374 containerd[1449]: time="2025-01-13T20:30:50.462590296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:3,}" Jan 13 20:30:50.464374 containerd[1449]: time="2025-01-13T20:30:50.464187370Z" level=info msg="TearDown network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" successfully" Jan 13 20:30:50.464374 containerd[1449]: time="2025-01-13T20:30:50.464331315Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" returns successfully" Jan 13 20:30:50.465577 containerd[1449]: time="2025-01-13T20:30:50.465549524Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\"" Jan 13 20:30:50.465749 containerd[1449]: time="2025-01-13T20:30:50.465730035Z" level=info msg="TearDown network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" successfully" Jan 13 20:30:50.465807 containerd[1449]: time="2025-01-13T20:30:50.465793486Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" returns successfully" Jan 13 20:30:50.465909 systemd[1]: run-netns-cni\x2d1590a691\x2d2ecd\x2dca1c\x2d5703\x2d17d670183985.mount: Deactivated successfully. Jan 13 20:30:50.467679 containerd[1449]: time="2025-01-13T20:30:50.466732167Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\"" Jan 13 20:30:50.467679 containerd[1449]: time="2025-01-13T20:30:50.466901757Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully" Jan 13 20:30:50.467679 containerd[1449]: time="2025-01-13T20:30:50.466914759Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully" Jan 13 20:30:50.467679 containerd[1449]: time="2025-01-13T20:30:50.467667208Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\"" Jan 13 20:30:50.467879 kubelet[2618]: I0113 20:30:50.467178 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3" Jan 13 20:30:50.467922 containerd[1449]: time="2025-01-13T20:30:50.467817514Z" level=info msg="Ensure that sandbox fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3 in task-service has been cleanup successfully" Jan 13 20:30:50.469370 containerd[1449]: time="2025-01-13T20:30:50.468045953Z" level=info msg="TearDown network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" successfully" Jan 13 20:30:50.469370 containerd[1449]: time="2025-01-13T20:30:50.468072558Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" returns successfully" Jan 13 20:30:50.469370 containerd[1449]: time="2025-01-13T20:30:50.468632574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:30:50.470114 containerd[1449]: time="2025-01-13T20:30:50.469879508Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\"" Jan 13 20:30:50.470114 containerd[1449]: time="2025-01-13T20:30:50.469958162Z" level=info msg="TearDown 
network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" successfully" Jan 13 20:30:50.470114 containerd[1449]: time="2025-01-13T20:30:50.469968243Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" returns successfully" Jan 13 20:30:50.472598 containerd[1449]: time="2025-01-13T20:30:50.472399221Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\"" Jan 13 20:30:50.472598 containerd[1449]: time="2025-01-13T20:30:50.472505159Z" level=info msg="TearDown network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully" Jan 13 20:30:50.472598 containerd[1449]: time="2025-01-13T20:30:50.472515881Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully" Jan 13 20:30:50.473261 containerd[1449]: time="2025-01-13T20:30:50.473082298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:30:50.474064 kubelet[2618]: I0113 20:30:50.473954 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21" Jan 13 20:30:50.474948 containerd[1449]: time="2025-01-13T20:30:50.474735342Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\"" Jan 13 20:30:50.475061 containerd[1449]: time="2025-01-13T20:30:50.475018071Z" level=info msg="Ensure that sandbox fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21 in task-service has been cleanup successfully" Jan 13 20:30:50.476086 containerd[1449]: time="2025-01-13T20:30:50.475625495Z" level=info msg="TearDown network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" successfully" Jan 13 20:30:50.476086 containerd[1449]: time="2025-01-13T20:30:50.475655181Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" returns successfully" Jan 13 20:30:50.476086 containerd[1449]: time="2025-01-13T20:30:50.475996119Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\"" Jan 13 20:30:50.476309 containerd[1449]: time="2025-01-13T20:30:50.476255244Z" level=info msg="TearDown network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" successfully" Jan 13 20:30:50.476309 containerd[1449]: time="2025-01-13T20:30:50.476273807Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" returns successfully" Jan 13 20:30:50.477131 containerd[1449]: time="2025-01-13T20:30:50.476532091Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\"" Jan 13 20:30:50.477131 containerd[1449]: time="2025-01-13T20:30:50.476628228Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully" Jan 13 20:30:50.477131 containerd[1449]: time="2025-01-13T20:30:50.476638429Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully" Jan 13 20:30:50.477131 containerd[1449]: time="2025-01-13T20:30:50.477053021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:3,}" Jan 13 20:30:50.477769 kubelet[2618]: E0113 20:30:50.476782 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:50.549670 containerd[1449]: time="2025-01-13T20:30:50.548039457Z" level=error msg="Failed to destroy network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.565133 containerd[1449]: time="2025-01-13T20:30:50.565080585Z" level=error msg="encountered an error cleaning up failed sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.565478 containerd[1449]: time="2025-01-13T20:30:50.565274818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.565588 kubelet[2618]: E0113 20:30:50.565558 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.565659 kubelet[2618]: E0113 20:30:50.565614 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:50.565659 kubelet[2618]: E0113 20:30:50.565640 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:50.565718 kubelet[2618]: E0113 20:30:50.565692 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" podUID="1881e196-e398-402d-91c4-c538f30e9a68" Jan 13 20:30:50.857279 containerd[1449]: time="2025-01-13T20:30:50.857157327Z" level=error msg="Failed to destroy network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.858589 containerd[1449]: time="2025-01-13T20:30:50.858387539Z" level=error msg="encountered an error cleaning up failed sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.858589 containerd[1449]: time="2025-01-13T20:30:50.858462151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.858756 kubelet[2618]: E0113 20:30:50.858718 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.858828 kubelet[2618]: E0113 20:30:50.858783 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:50.858828 kubelet[2618]: E0113 20:30:50.858810 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:50.858923 kubelet[2618]: E0113 20:30:50.858865 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:50.865141 containerd[1449]: time="2025-01-13T20:30:50.864995954Z" level=error msg="Failed to destroy network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.865530 containerd[1449]: time="2025-01-13T20:30:50.865505041Z" level=error msg="encountered an error cleaning up failed sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.865751 containerd[1449]: time="2025-01-13T20:30:50.865728680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.866115 kubelet[2618]: E0113 20:30:50.866090 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.866192 kubelet[2618]: E0113 20:30:50.866146 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:50.866192 kubelet[2618]: E0113 20:30:50.866167 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:50.866252 kubelet[2618]: E0113 20:30:50.866223 
2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" podUID="5e2bb27f-e9a8-4574-9125-ac3ff1f5546b" Jan 13 20:30:50.871847 containerd[1449]: time="2025-01-13T20:30:50.871673581Z" level=error msg="Failed to destroy network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.873825 containerd[1449]: time="2025-01-13T20:30:50.873271096Z" level=error msg="encountered an error cleaning up failed sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.874071 containerd[1449]: time="2025-01-13T20:30:50.874031266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.874353 kubelet[2618]: E0113 20:30:50.874320 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.874440 kubelet[2618]: E0113 20:30:50.874375 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:50.874440 kubelet[2618]: E0113 20:30:50.874395 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:50.874495 kubelet[2618]: E0113 20:30:50.874452 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rlqw7" podUID="bfa3473d-43a3-447d-b0a7-c066cdd14301" Jan 13 20:30:50.874741 containerd[1449]: time="2025-01-13T20:30:50.874657534Z" level=error msg="Failed to destroy network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.875986 containerd[1449]: time="2025-01-13T20:30:50.875485236Z" level=error msg="encountered an error cleaning up failed sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.876606 containerd[1449]: time="2025-01-13T20:30:50.876496570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.877356 containerd[1449]: time="2025-01-13T20:30:50.877261821Z" level=error msg="Failed to destroy network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.877578 kubelet[2618]: E0113 20:30:50.877507 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.877755 kubelet[2618]: E0113 20:30:50.877632 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:50.877755 kubelet[2618]: E0113 20:30:50.877660 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:50.877993 kubelet[2618]: E0113 20:30:50.877793 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" podUID="32e2fd28-1c60-4d81-883d-85b833d714fc" Jan 13 20:30:50.878834 containerd[1449]: time="2025-01-13T20:30:50.878745156Z" level=error msg="encountered an error cleaning up failed sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.879114 containerd[1449]: time="2025-01-13T20:30:50.878800366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.879282 kubelet[2618]: E0113 20:30:50.879259 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:50.879372 kubelet[2618]: E0113 20:30:50.879299 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:50.879372 kubelet[2618]: E0113 20:30:50.879321 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:50.879372 kubelet[2618]: E0113 20:30:50.879369 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kr682" podUID="9a4b274c-0db7-4f24-b51e-a8ee914d4260" Jan 13 20:30:51.246959 systemd[1]: run-netns-cni\x2d7097f512\x2d264f\x2df15e\x2de2ea\x2d047208faee61.mount: Deactivated successfully. Jan 13 20:30:51.247518 systemd[1]: run-netns-cni\x2dd5fd2ced\x2d00d8\x2d03bf\x2d4c82\x2def85023305fc.mount: Deactivated successfully. Jan 13 20:30:51.484883 kubelet[2618]: I0113 20:30:51.484230 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9" Jan 13 20:30:51.486064 containerd[1449]: time="2025-01-13T20:30:51.486002601Z" level=info msg="StopPodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\"" Jan 13 20:30:51.486881 containerd[1449]: time="2025-01-13T20:30:51.486650029Z" level=info msg="Ensure that sandbox 9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9 in task-service has been cleanup successfully" Jan 13 20:30:51.487758 containerd[1449]: time="2025-01-13T20:30:51.487616550Z" level=info msg="TearDown network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" successfully" Jan 13 20:30:51.487758 containerd[1449]: time="2025-01-13T20:30:51.487637634Z" level=info msg="StopPodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" returns successfully" Jan 13 20:30:51.488788 containerd[1449]: time="2025-01-13T20:30:51.488748179Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\"" Jan 13 20:30:51.488856 containerd[1449]: time="2025-01-13T20:30:51.488842195Z" level=info msg="TearDown network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" successfully" Jan 13 20:30:51.488856 containerd[1449]: time="2025-01-13T20:30:51.488853197Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" returns successfully" Jan 13 20:30:51.490348 systemd[1]: run-netns-cni\x2dae292167\x2d589c\x2d1834\x2d2dce\x2d7228eb4f285f.mount: Deactivated successfully. 
Jan 13 20:30:51.491690 containerd[1449]: time="2025-01-13T20:30:51.491621579Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\"" Jan 13 20:30:51.492094 containerd[1449]: time="2025-01-13T20:30:51.491729237Z" level=info msg="TearDown network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" successfully" Jan 13 20:30:51.492094 containerd[1449]: time="2025-01-13T20:30:51.491751601Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" returns successfully" Jan 13 20:30:51.493065 containerd[1449]: time="2025-01-13T20:30:51.492900073Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\"" Jan 13 20:30:51.493065 containerd[1449]: time="2025-01-13T20:30:51.493023494Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully" Jan 13 20:30:51.493065 containerd[1449]: time="2025-01-13T20:30:51.493034375Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully" Jan 13 20:30:51.493859 containerd[1449]: time="2025-01-13T20:30:51.493708928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:4,}" Jan 13 20:30:51.494638 kubelet[2618]: I0113 20:30:51.494223 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312" Jan 13 20:30:51.495216 containerd[1449]: time="2025-01-13T20:30:51.495181214Z" level=info msg="StopPodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\"" Jan 13 20:30:51.496427 containerd[1449]: time="2025-01-13T20:30:51.496349609Z" level=info msg="Ensure that sandbox b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312 in task-service has been cleanup successfully" Jan 13 20:30:51.497829 containerd[1449]: time="2025-01-13T20:30:51.497744643Z" level=info msg="TearDown network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" successfully" Jan 13 20:30:51.498067 containerd[1449]: time="2025-01-13T20:30:51.497939555Z" level=info msg="StopPodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" returns successfully" Jan 13 20:30:51.498434 containerd[1449]: time="2025-01-13T20:30:51.498401832Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\"" Jan 13 20:30:51.498554 containerd[1449]: time="2025-01-13T20:30:51.498506490Z" level=info msg="TearDown network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" successfully" Jan 13 20:30:51.498554 containerd[1449]: time="2025-01-13T20:30:51.498523653Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" returns successfully" Jan 13 20:30:51.499221 containerd[1449]: time="2025-01-13T20:30:51.499115272Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\"" Jan 13 20:30:51.499221 containerd[1449]: time="2025-01-13T20:30:51.499204486Z" level=info msg="TearDown network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" successfully" Jan 13 20:30:51.499221 containerd[1449]: time="2025-01-13T20:30:51.499215328Z" 
level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" returns successfully" Jan 13 20:30:51.500115 containerd[1449]: time="2025-01-13T20:30:51.500071791Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\"" Jan 13 20:30:51.500494 containerd[1449]: time="2025-01-13T20:30:51.500383323Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully" Jan 13 20:30:51.500494 containerd[1449]: time="2025-01-13T20:30:51.500421410Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully" Jan 13 20:30:51.501235 containerd[1449]: time="2025-01-13T20:30:51.501193699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:30:51.501304 systemd[1]: run-netns-cni\x2d14667269\x2d9318\x2d8921\x2d6a45\x2dcda835ffdf07.mount: Deactivated successfully. Jan 13 20:30:51.503704 kubelet[2618]: I0113 20:30:51.503674 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8" Jan 13 20:30:51.504568 containerd[1449]: time="2025-01-13T20:30:51.504253770Z" level=info msg="StopPodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\"" Jan 13 20:30:51.504568 containerd[1449]: time="2025-01-13T20:30:51.504465606Z" level=info msg="Ensure that sandbox a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8 in task-service has been cleanup successfully" Jan 13 20:30:51.504983 containerd[1449]: time="2025-01-13T20:30:51.504773377Z" level=info msg="TearDown network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" successfully" Jan 13 20:30:51.504983 containerd[1449]: time="2025-01-13T20:30:51.504793980Z" level=info msg="StopPodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" returns successfully" Jan 13 20:30:51.505693 containerd[1449]: time="2025-01-13T20:30:51.505393041Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\"" Jan 13 20:30:51.505693 containerd[1449]: time="2025-01-13T20:30:51.505487056Z" level=info msg="TearDown network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" successfully" Jan 13 20:30:51.505693 containerd[1449]: time="2025-01-13T20:30:51.505497418Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" returns successfully" Jan 13 20:30:51.505845 containerd[1449]: time="2025-01-13T20:30:51.505813071Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\"" Jan 13 20:30:51.505916 containerd[1449]: time="2025-01-13T20:30:51.505898005Z" level=info msg="TearDown network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" successfully" Jan 13 20:30:51.505916 containerd[1449]: time="2025-01-13T20:30:51.505915048Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" returns successfully" Jan 13 20:30:51.506469 containerd[1449]: time="2025-01-13T20:30:51.506436815Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\"" Jan 13 20:30:51.506569 
containerd[1449]: time="2025-01-13T20:30:51.506532511Z" level=info msg="TearDown network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully" Jan 13 20:30:51.506569 containerd[1449]: time="2025-01-13T20:30:51.506565957Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully" Jan 13 20:30:51.506838 kubelet[2618]: E0113 20:30:51.506811 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:51.507952 containerd[1449]: time="2025-01-13T20:30:51.507633535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:4,}" Jan 13 20:30:51.508243 systemd[1]: run-netns-cni\x2d1ee208f5\x2d44a8\x2dc3bf\x2df7fa\x2d9e51bbeb895e.mount: Deactivated successfully. Jan 13 20:30:51.509451 containerd[1449]: time="2025-01-13T20:30:51.508707794Z" level=info msg="StopPodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\"" Jan 13 20:30:51.509451 containerd[1449]: time="2025-01-13T20:30:51.508955556Z" level=info msg="Ensure that sandbox 9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812 in task-service has been cleanup successfully" Jan 13 20:30:51.509451 containerd[1449]: time="2025-01-13T20:30:51.509159630Z" level=info msg="TearDown network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" successfully" Jan 13 20:30:51.509451 containerd[1449]: time="2025-01-13T20:30:51.509176833Z" level=info msg="StopPodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" returns successfully" Jan 13 20:30:51.509566 kubelet[2618]: I0113 20:30:51.508257 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812" Jan 13 20:30:51.510574 containerd[1449]: time="2025-01-13T20:30:51.510515056Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\"" Jan 13 20:30:51.510966 containerd[1449]: time="2025-01-13T20:30:51.510944088Z" level=info msg="TearDown network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" successfully" Jan 13 20:30:51.511026 containerd[1449]: time="2025-01-13T20:30:51.510966612Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" returns successfully" Jan 13 20:30:51.511376 containerd[1449]: time="2025-01-13T20:30:51.511351276Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\"" Jan 13 20:30:51.511446 containerd[1449]: time="2025-01-13T20:30:51.511432090Z" level=info msg="TearDown network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" successfully" Jan 13 20:30:51.511476 containerd[1449]: time="2025-01-13T20:30:51.511444932Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" returns successfully" Jan 13 20:30:51.512441 containerd[1449]: time="2025-01-13T20:30:51.512412574Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\"" Jan 13 20:30:51.512523 containerd[1449]: time="2025-01-13T20:30:51.512507789Z" level=info msg="TearDown network for sandbox 
\"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully" Jan 13 20:30:51.512523 containerd[1449]: time="2025-01-13T20:30:51.512521232Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully" Jan 13 20:30:51.512706 systemd[1]: run-netns-cni\x2deb8783ca\x2dacff\x2d01dd\x2d8aed\x2d796e1baa6e77.mount: Deactivated successfully. Jan 13 20:30:51.513263 containerd[1449]: time="2025-01-13T20:30:51.513217508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:30:51.514914 kubelet[2618]: I0113 20:30:51.514887 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c" Jan 13 20:30:51.515983 containerd[1449]: time="2025-01-13T20:30:51.515643793Z" level=info msg="StopPodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\"" Jan 13 20:30:51.515983 containerd[1449]: time="2025-01-13T20:30:51.515805180Z" level=info msg="Ensure that sandbox 4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c in task-service has been cleanup successfully" Jan 13 20:30:51.516183 containerd[1449]: time="2025-01-13T20:30:51.516158159Z" level=info msg="TearDown network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" successfully" Jan 13 20:30:51.516236 containerd[1449]: time="2025-01-13T20:30:51.516223210Z" level=info msg="StopPodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" returns successfully" Jan 13 20:30:51.516614 containerd[1449]: time="2025-01-13T20:30:51.516591152Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\"" Jan 13 20:30:51.516761 containerd[1449]: time="2025-01-13T20:30:51.516744497Z" level=info msg="TearDown network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" successfully" Jan 13 20:30:51.516816 containerd[1449]: time="2025-01-13T20:30:51.516802787Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" returns successfully" Jan 13 20:30:51.517189 containerd[1449]: time="2025-01-13T20:30:51.517163527Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\"" Jan 13 20:30:51.517356 containerd[1449]: time="2025-01-13T20:30:51.517330475Z" level=info msg="TearDown network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" successfully" Jan 13 20:30:51.517470 containerd[1449]: time="2025-01-13T20:30:51.517443854Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" returns successfully" Jan 13 20:30:51.517696 kubelet[2618]: I0113 20:30:51.517668 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b" Jan 13 20:30:51.518285 containerd[1449]: time="2025-01-13T20:30:51.518244588Z" level=info msg="StopPodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\"" Jan 13 20:30:51.518463 containerd[1449]: time="2025-01-13T20:30:51.518430619Z" level=info msg="Ensure that sandbox 8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b in task-service has been cleanup successfully" Jan 13 20:30:51.518658 
containerd[1449]: time="2025-01-13T20:30:51.518183778Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\"" Jan 13 20:30:51.518711 containerd[1449]: time="2025-01-13T20:30:51.518671939Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully" Jan 13 20:30:51.518711 containerd[1449]: time="2025-01-13T20:30:51.518684542Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully" Jan 13 20:30:51.518851 containerd[1449]: time="2025-01-13T20:30:51.518818124Z" level=info msg="TearDown network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" successfully" Jan 13 20:30:51.518851 containerd[1449]: time="2025-01-13T20:30:51.518840007Z" level=info msg="StopPodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" returns successfully" Jan 13 20:30:51.518913 kubelet[2618]: E0113 20:30:51.518879 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:51.519356 containerd[1449]: time="2025-01-13T20:30:51.519266599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:4,}" Jan 13 20:30:51.519534 containerd[1449]: time="2025-01-13T20:30:51.519509599Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" Jan 13 20:30:51.519703 containerd[1449]: time="2025-01-13T20:30:51.519665265Z" level=info msg="TearDown network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" successfully" Jan 13 20:30:51.519703 containerd[1449]: time="2025-01-13T20:30:51.519687189Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" returns successfully" Jan 13 20:30:51.520195 containerd[1449]: time="2025-01-13T20:30:51.520159388Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:30:51.521351 containerd[1449]: time="2025-01-13T20:30:51.521316061Z" level=info msg="TearDown network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" successfully" Jan 13 20:30:51.521351 containerd[1449]: time="2025-01-13T20:30:51.521339985Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" returns successfully" Jan 13 20:30:51.522214 containerd[1449]: time="2025-01-13T20:30:51.521852231Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:30:51.522214 containerd[1449]: time="2025-01-13T20:30:51.521946006Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:30:51.522214 containerd[1449]: time="2025-01-13T20:30:51.521956568Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:30:51.522615 containerd[1449]: time="2025-01-13T20:30:51.522579592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:4,}" Jan 13 20:30:51.610662 systemd[1]: Started 
sshd@8-10.0.0.136:22-10.0.0.1:52698.service - OpenSSH per-connection server daemon (10.0.0.1:52698). Jan 13 20:30:51.710899 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 52698 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:30:51.713413 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:51.720785 systemd-logind[1430]: New session 9 of user core. Jan 13 20:30:51.722782 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:30:51.805810 containerd[1449]: time="2025-01-13T20:30:51.805684418Z" level=error msg="Failed to destroy network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.806081 containerd[1449]: time="2025-01-13T20:30:51.806041678Z" level=error msg="encountered an error cleaning up failed sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.806136 containerd[1449]: time="2025-01-13T20:30:51.806115090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.808817 kubelet[2618]: E0113 20:30:51.808753 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.808817 kubelet[2618]: E0113 20:30:51.808809 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:51.809005 kubelet[2618]: E0113 20:30:51.808835 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" Jan 13 20:30:51.809005 kubelet[2618]: E0113 20:30:51.808891 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75ff5498fd-l6pnm_calico-system(1881e196-e398-402d-91c4-c538f30e9a68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" podUID="1881e196-e398-402d-91c4-c538f30e9a68" Jan 13 20:30:51.888777 containerd[1449]: time="2025-01-13T20:30:51.887713885Z" level=error msg="Failed to destroy network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.889577 containerd[1449]: time="2025-01-13T20:30:51.889514185Z" level=error msg="encountered an error cleaning up failed sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.889703 containerd[1449]: time="2025-01-13T20:30:51.889605641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.890882 kubelet[2618]: E0113 20:30:51.889935 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.890882 kubelet[2618]: E0113 20:30:51.889997 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 20:30:51.890882 kubelet[2618]: E0113 20:30:51.890016 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" Jan 13 
20:30:51.891013 kubelet[2618]: E0113 20:30:51.890076 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-xd4cg_calico-apiserver(32e2fd28-1c60-4d81-883d-85b833d714fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" podUID="32e2fd28-1c60-4d81-883d-85b833d714fc" Jan 13 20:30:51.894416 sshd[4332]: Connection closed by 10.0.0.1 port 52698 Jan 13 20:30:51.894261 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:51.896082 containerd[1449]: time="2025-01-13T20:30:51.895180292Z" level=error msg="Failed to destroy network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.897083 containerd[1449]: time="2025-01-13T20:30:51.896848331Z" level=error msg="encountered an error cleaning up failed sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.897083 containerd[1449]: time="2025-01-13T20:30:51.896917222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.897278 kubelet[2618]: E0113 20:30:51.897178 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.897278 kubelet[2618]: E0113 20:30:51.897236 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:51.897278 kubelet[2618]: E0113 20:30:51.897257 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rlqw7" Jan 13 20:30:51.897363 kubelet[2618]: E0113 20:30:51.897315 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rlqw7_kube-system(bfa3473d-43a3-447d-b0a7-c066cdd14301)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rlqw7" podUID="bfa3473d-43a3-447d-b0a7-c066cdd14301" Jan 13 20:30:51.900360 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:52698.service: Deactivated successfully. Jan 13 20:30:51.902741 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:30:51.904234 systemd-logind[1430]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:30:51.907600 systemd-logind[1430]: Removed session 9. Jan 13 20:30:51.909459 containerd[1449]: time="2025-01-13T20:30:51.909419752Z" level=error msg="Failed to destroy network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.910639 containerd[1449]: time="2025-01-13T20:30:51.910608270Z" level=error msg="Failed to destroy network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.915872 containerd[1449]: time="2025-01-13T20:30:51.915820581Z" level=error msg="encountered an error cleaning up failed sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.915997 containerd[1449]: time="2025-01-13T20:30:51.915907116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.916301 kubelet[2618]: E0113 20:30:51.916277 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.916374 kubelet[2618]: E0113 20:30:51.916350 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:51.916417 kubelet[2618]: E0113 20:30:51.916388 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" Jan 13 20:30:51.916596 kubelet[2618]: E0113 20:30:51.916449 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d98fcfdcc-j9cwm_calico-apiserver(5e2bb27f-e9a8-4574-9125-ac3ff1f5546b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" podUID="5e2bb27f-e9a8-4574-9125-ac3ff1f5546b" Jan 13 20:30:51.917777 containerd[1449]: time="2025-01-13T20:30:51.917625443Z" level=error msg="Failed to destroy network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.918133 containerd[1449]: time="2025-01-13T20:30:51.918100402Z" level=error msg="encountered an error cleaning up failed sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.918264 containerd[1449]: time="2025-01-13T20:30:51.918241866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.918710 kubelet[2618]: E0113 20:30:51.918533 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.918710 kubelet[2618]: E0113 20:30:51.918600 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:51.918710 kubelet[2618]: E0113 20:30:51.918620 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kr682" Jan 13 20:30:51.918832 kubelet[2618]: E0113 20:30:51.918680 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kr682_kube-system(9a4b274c-0db7-4f24-b51e-a8ee914d4260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kr682" podUID="9a4b274c-0db7-4f24-b51e-a8ee914d4260" Jan 13 20:30:51.923166 containerd[1449]: time="2025-01-13T20:30:51.922804828Z" level=error msg="encountered an error cleaning up failed sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.923166 containerd[1449]: time="2025-01-13T20:30:51.922880041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.923351 kubelet[2618]: E0113 20:30:51.923236 2618 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:30:51.923351 kubelet[2618]: E0113 20:30:51.923286 2618 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:51.923351 kubelet[2618]: E0113 20:30:51.923306 2618 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl4cq" Jan 13 20:30:51.923437 kubelet[2618]: E0113 20:30:51.923362 2618 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl4cq_calico-system(727a9f8b-291c-4cff-81c1-972e6591d923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl4cq" podUID="727a9f8b-291c-4cff-81c1-972e6591d923" Jan 13 20:30:52.131635 containerd[1449]: time="2025-01-13T20:30:52.131584495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:52.134727 containerd[1449]: time="2025-01-13T20:30:52.134677598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:30:52.135548 containerd[1449]: time="2025-01-13T20:30:52.135518135Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:52.138151 containerd[1449]: time="2025-01-13T20:30:52.138109036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:52.138823 containerd[1449]: time="2025-01-13T20:30:52.138772744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.756355145s" Jan 13 20:30:52.138859 containerd[1449]: time="2025-01-13T20:30:52.138821872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:30:52.145919 containerd[1449]: time="2025-01-13T20:30:52.145863898Z" level=info msg="CreateContainer within sandbox \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:30:52.166659 containerd[1449]: time="2025-01-13T20:30:52.166611153Z" level=info msg="CreateContainer within sandbox \"fe47cf139320ec72ecca393b45a54ac7f8cef29fd5b8d6bd8b982698ceb85d27\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e362cca2940b74f3d0f51ad61913297cc1691414de760954a246556f86018795\"" Jan 13 20:30:52.167234 containerd[1449]: time="2025-01-13T20:30:52.167205649Z" level=info msg="StartContainer for \"e362cca2940b74f3d0f51ad61913297cc1691414de760954a246556f86018795\"" Jan 13 20:30:52.225735 systemd[1]: Started cri-containerd-e362cca2940b74f3d0f51ad61913297cc1691414de760954a246556f86018795.scope - libcontainer container e362cca2940b74f3d0f51ad61913297cc1691414de760954a246556f86018795. Jan 13 20:30:52.249463 systemd[1]: run-netns-cni\x2dbeae0184\x2dfe30\x2d8664\x2d6634\x2d561e5ba90825.mount: Deactivated successfully. Jan 13 20:30:52.249576 systemd[1]: run-netns-cni\x2d8237ec92\x2d7f89\x2d63e4\x2dcfde\x2d4d7bdaf48193.mount: Deactivated successfully. Jan 13 20:30:52.249631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831706720.mount: Deactivated successfully. Jan 13 20:30:52.257203 containerd[1449]: time="2025-01-13T20:30:52.257149641Z" level=info msg="StartContainer for \"e362cca2940b74f3d0f51ad61913297cc1691414de760954a246556f86018795\" returns successfully" Jan 13 20:30:52.498638 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:30:52.498761 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 20:30:52.525809 kubelet[2618]: I0113 20:30:52.525716 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2" Jan 13 20:30:52.528608 containerd[1449]: time="2025-01-13T20:30:52.527215293Z" level=info msg="StopPodSandbox for \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\"" Jan 13 20:30:52.528608 containerd[1449]: time="2025-01-13T20:30:52.527393802Z" level=info msg="Ensure that sandbox e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2 in task-service has been cleanup successfully" Jan 13 20:30:52.528608 containerd[1449]: time="2025-01-13T20:30:52.527657605Z" level=info msg="TearDown network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\" successfully" Jan 13 20:30:52.528608 containerd[1449]: time="2025-01-13T20:30:52.527671968Z" level=info msg="StopPodSandbox for \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\" returns successfully" Jan 13 20:30:52.529935 systemd[1]: run-netns-cni\x2db2ac41a4\x2d7057\x2d944e\x2d94da\x2d9ba25db0b24b.mount: Deactivated successfully.
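
[Editor's note] Every RunPodSandbox failure above traces to a single missing file: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running with /var/lib/calico mounted from the host, and the calico-node container only reaches StartContainer at 20:30:52. A minimal Go sketch of the failure mode the error text describes (illustrative only: readNodename and the control flow here are assumptions, not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Path stat'ed in the errors above; calico/node writes it after starting
// with /var/lib/calico mounted from the host.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename sketches the check: until calico/node has written the file,
// every CNI add/delete aborts with the hint quoted in the log.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := readNodename(); err != nil {
		fmt.Println("CNI add/delete would fail here:", err)
	} else {
		fmt.Println("node name:", name)
	}
}
```

Once calico-node is up and the file exists, the same CNI invocations can proceed to IPAM, which is consistent with the successful address assignment at 20:30:53 below.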
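[Editor's note] The StopPodSandbox / TearDown / RunPodSandbox churn around these errors is kubelet's sandbox recovery loop: each failed network setup leaves a dead sandbox that is torn down (including its netns mount, hence the run-netns-cni\x2d... cleanups), and the pod is recreated with the Attempt field of PodSandboxMetadata bumped, which is why the retries that follow carry Attempt:5. A compressed Go sketch of that pattern, using a simplified stand-in for the CRI runtime service (the real types live in k8s.io/cri-api; runtimeService and recreate are illustrative names):

```go
package sandboxretry

import "fmt"

// PodSandboxMetadata mirrors the fields printed in the RunPodSandbox
// entries above (Name, Uid, Namespace, Attempt), simplified from CRI.
type PodSandboxMetadata struct {
	Name, Uid, Namespace string
	Attempt              uint32
}

// runtimeService is a hypothetical stand-in for the CRI runtime client.
type runtimeService interface {
	StopPodSandbox(id string) error                      // TearDown network + netns cleanup
	RunPodSandbox(md PodSandboxMetadata) (string, error) // create a fresh sandbox
}

// recreate tears down the dead sandbox, then retries creation with an
// incremented attempt counter; the log's walk from Attempt:4 to Attempt:5
// is this loop going around once more after another CNI failure.
func recreate(rt runtimeService, deadID string, md PodSandboxMetadata) (string, error) {
	if err := rt.StopPodSandbox(deadID); err != nil {
		return "", fmt.Errorf("stop sandbox %s: %w", deadID, err)
	}
	md.Attempt++
	return rt.RunPodSandbox(md)
}
```
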
Jan 13 20:30:52.530462 containerd[1449]: time="2025-01-13T20:30:52.530284993Z" level=info msg="StopPodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\"" Jan 13 20:30:52.530778 containerd[1449]: time="2025-01-13T20:30:52.530760510Z" level=info msg="TearDown network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" successfully" Jan 13 20:30:52.531023 containerd[1449]: time="2025-01-13T20:30:52.530978786Z" level=info msg="StopPodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" returns successfully" Jan 13 20:30:52.531517 containerd[1449]: time="2025-01-13T20:30:52.531497190Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\"" Jan 13 20:30:52.531827 containerd[1449]: time="2025-01-13T20:30:52.531751111Z" level=info msg="TearDown network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" successfully" Jan 13 20:30:52.532502 containerd[1449]: time="2025-01-13T20:30:52.532002392Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" returns successfully" Jan 13 20:30:52.533129 kubelet[2618]: I0113 20:30:52.532912 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2" Jan 13 20:30:52.533672 containerd[1449]: time="2025-01-13T20:30:52.533421303Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\"" Jan 13 20:30:52.533672 containerd[1449]: time="2025-01-13T20:30:52.533503196Z" level=info msg="TearDown network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" successfully" Jan 13 20:30:52.533672 containerd[1449]: time="2025-01-13T20:30:52.533513638Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" returns successfully" Jan 13 20:30:52.534506 containerd[1449]: time="2025-01-13T20:30:52.533947628Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\"" Jan 13 20:30:52.534506 containerd[1449]: time="2025-01-13T20:30:52.534019800Z" level=info msg="TearDown network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully" Jan 13 20:30:52.534506 containerd[1449]: time="2025-01-13T20:30:52.534029202Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully" Jan 13 20:30:52.534506 containerd[1449]: time="2025-01-13T20:30:52.534194829Z" level=info msg="StopPodSandbox for \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\"" Jan 13 20:30:52.534506 containerd[1449]: time="2025-01-13T20:30:52.534358695Z" level=info msg="Ensure that sandbox 59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2 in task-service has been cleanup successfully" Jan 13 20:30:52.534506 containerd[1449]: time="2025-01-13T20:30:52.534398662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:30:52.536089 containerd[1449]: time="2025-01-13T20:30:52.535981719Z" level=info msg="TearDown network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\" successfully" Jan 13 20:30:52.536089 containerd[1449]: time="2025-01-13T20:30:52.536016285Z" 
level=info msg="StopPodSandbox for \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\" returns successfully" Jan 13 20:30:52.536286 systemd[1]: run-netns-cni\x2d8dd92c8e\x2d83f9\x2d743c\x2d0f5c\x2d50864a268173.mount: Deactivated successfully. Jan 13 20:30:52.537720 containerd[1449]: time="2025-01-13T20:30:52.537679356Z" level=info msg="StopPodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\"" Jan 13 20:30:52.537780 containerd[1449]: time="2025-01-13T20:30:52.537759048Z" level=info msg="TearDown network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" successfully" Jan 13 20:30:52.537780 containerd[1449]: time="2025-01-13T20:30:52.537768450Z" level=info msg="StopPodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" returns successfully" Jan 13 20:30:52.538734 containerd[1449]: time="2025-01-13T20:30:52.538707403Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\"" Jan 13 20:30:52.538796 containerd[1449]: time="2025-01-13T20:30:52.538784775Z" level=info msg="TearDown network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" successfully" Jan 13 20:30:52.538796 containerd[1449]: time="2025-01-13T20:30:52.538794337Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" returns successfully" Jan 13 20:30:52.540690 containerd[1449]: time="2025-01-13T20:30:52.539091665Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\"" Jan 13 20:30:52.540690 containerd[1449]: time="2025-01-13T20:30:52.539315382Z" level=info msg="TearDown network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" successfully" Jan 13 20:30:52.540690 containerd[1449]: time="2025-01-13T20:30:52.539331624Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" returns successfully" Jan 13 20:30:52.540809 kubelet[2618]: I0113 20:30:52.540720 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166" Jan 13 20:30:52.541372 containerd[1449]: time="2025-01-13T20:30:52.541274980Z" level=info msg="StopPodSandbox for \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\"" Jan 13 20:30:52.541461 containerd[1449]: time="2025-01-13T20:30:52.541427445Z" level=info msg="Ensure that sandbox dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166 in task-service has been cleanup successfully" Jan 13 20:30:52.541905 containerd[1449]: time="2025-01-13T20:30:52.541851634Z" level=info msg="TearDown network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\" successfully" Jan 13 20:30:52.541905 containerd[1449]: time="2025-01-13T20:30:52.541875398Z" level=info msg="StopPodSandbox for \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\" returns successfully" Jan 13 20:30:52.542381 containerd[1449]: time="2025-01-13T20:30:52.542133960Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\"" Jan 13 20:30:52.542381 containerd[1449]: time="2025-01-13T20:30:52.542235097Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully" Jan 13 20:30:52.542381 containerd[1449]: 
time="2025-01-13T20:30:52.542247139Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully" Jan 13 20:30:52.542503 kubelet[2618]: E0113 20:30:52.542477 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:52.542787 containerd[1449]: time="2025-01-13T20:30:52.542764863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:5,}" Jan 13 20:30:52.543336 containerd[1449]: time="2025-01-13T20:30:52.543110799Z" level=info msg="StopPodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\"" Jan 13 20:30:52.543204 systemd[1]: run-netns-cni\x2d61500723\x2d5fc2\x2d25b4\x2d2b8c\x2d93b13540eeac.mount: Deactivated successfully. Jan 13 20:30:52.546225 kubelet[2618]: I0113 20:30:52.546206 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21" Jan 13 20:30:52.547061 containerd[1449]: time="2025-01-13T20:30:52.546832765Z" level=info msg="StopPodSandbox for \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\"" Jan 13 20:30:52.556530 containerd[1449]: time="2025-01-13T20:30:52.556387399Z" level=info msg="Ensure that sandbox 18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21 in task-service has been cleanup successfully" Jan 13 20:30:52.561282 systemd[1]: run-netns-cni\x2d070b310b\x2d5041\x2d8403\x2d31d5\x2d6730bc782aaa.mount: Deactivated successfully. Jan 13 20:30:52.563710 containerd[1449]: time="2025-01-13T20:30:52.562648817Z" level=info msg="TearDown network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" successfully" Jan 13 20:30:52.563710 containerd[1449]: time="2025-01-13T20:30:52.562676382Z" level=info msg="StopPodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" returns successfully" Jan 13 20:30:52.568420 containerd[1449]: time="2025-01-13T20:30:52.567963482Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\"" Jan 13 20:30:52.568420 containerd[1449]: time="2025-01-13T20:30:52.568094303Z" level=info msg="TearDown network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" successfully" Jan 13 20:30:52.568420 containerd[1449]: time="2025-01-13T20:30:52.568105945Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" returns successfully" Jan 13 20:30:52.568702 containerd[1449]: time="2025-01-13T20:30:52.568643673Z" level=info msg="TearDown network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\" successfully" Jan 13 20:30:52.568917 containerd[1449]: time="2025-01-13T20:30:52.568853587Z" level=info msg="StopPodSandbox for \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\" returns successfully" Jan 13 20:30:52.571220 containerd[1449]: time="2025-01-13T20:30:52.571180405Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\"" Jan 13 20:30:52.571412 containerd[1449]: time="2025-01-13T20:30:52.571270900Z" level=info msg="TearDown network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" successfully" Jan 13 20:30:52.571412 
containerd[1449]: time="2025-01-13T20:30:52.571282862Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" returns successfully" Jan 13 20:30:52.571412 containerd[1449]: time="2025-01-13T20:30:52.571333550Z" level=info msg="StopPodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\"" Jan 13 20:30:52.571504 containerd[1449]: time="2025-01-13T20:30:52.571429126Z" level=info msg="TearDown network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" successfully" Jan 13 20:30:52.571504 containerd[1449]: time="2025-01-13T20:30:52.571444408Z" level=info msg="StopPodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" returns successfully" Jan 13 20:30:52.573682 containerd[1449]: time="2025-01-13T20:30:52.573204134Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\"" Jan 13 20:30:52.574008 containerd[1449]: time="2025-01-13T20:30:52.573596758Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" Jan 13 20:30:52.574008 containerd[1449]: time="2025-01-13T20:30:52.573954657Z" level=info msg="TearDown network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" successfully" Jan 13 20:30:52.574008 containerd[1449]: time="2025-01-13T20:30:52.573966058Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" returns successfully" Jan 13 20:30:52.574343 containerd[1449]: time="2025-01-13T20:30:52.574319236Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully" Jan 13 20:30:52.574436 containerd[1449]: time="2025-01-13T20:30:52.574385967Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully" Jan 13 20:30:52.575669 kubelet[2618]: E0113 20:30:52.575643 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:52.576175 containerd[1449]: time="2025-01-13T20:30:52.576042556Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:30:52.576175 containerd[1449]: time="2025-01-13T20:30:52.576122889Z" level=info msg="TearDown network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" successfully" Jan 13 20:30:52.576175 containerd[1449]: time="2025-01-13T20:30:52.576132571Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" returns successfully" Jan 13 20:30:52.577557 containerd[1449]: time="2025-01-13T20:30:52.577439623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:5,}" Jan 13 20:30:52.579315 containerd[1449]: time="2025-01-13T20:30:52.579277442Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:30:52.580111 containerd[1449]: time="2025-01-13T20:30:52.580067091Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:30:52.580185 containerd[1449]: time="2025-01-13T20:30:52.580116459Z" level=info 
msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:30:52.583691 containerd[1449]: time="2025-01-13T20:30:52.583657835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:5,}" Jan 13 20:30:52.589952 kubelet[2618]: I0113 20:30:52.589586 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c" Jan 13 20:30:52.591121 containerd[1449]: time="2025-01-13T20:30:52.591072161Z" level=info msg="StopPodSandbox for \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\"" Jan 13 20:30:52.591268 containerd[1449]: time="2025-01-13T20:30:52.591240148Z" level=info msg="Ensure that sandbox 7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c in task-service has been cleanup successfully" Jan 13 20:30:52.592436 containerd[1449]: time="2025-01-13T20:30:52.592375253Z" level=info msg="TearDown network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\" successfully" Jan 13 20:30:52.592436 containerd[1449]: time="2025-01-13T20:30:52.592413859Z" level=info msg="StopPodSandbox for \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\" returns successfully" Jan 13 20:30:52.593166 containerd[1449]: time="2025-01-13T20:30:52.592944506Z" level=info msg="StopPodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\"" Jan 13 20:30:52.593166 containerd[1449]: time="2025-01-13T20:30:52.593093010Z" level=info msg="TearDown network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" successfully" Jan 13 20:30:52.593166 containerd[1449]: time="2025-01-13T20:30:52.593105692Z" level=info msg="StopPodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" returns successfully" Jan 13 20:30:52.594654 containerd[1449]: time="2025-01-13T20:30:52.594609977Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\"" Jan 13 20:30:52.594751 containerd[1449]: time="2025-01-13T20:30:52.594697951Z" level=info msg="TearDown network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" successfully" Jan 13 20:30:52.594751 containerd[1449]: time="2025-01-13T20:30:52.594709633Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" returns successfully" Jan 13 20:30:52.595434 containerd[1449]: time="2025-01-13T20:30:52.595407026Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\"" Jan 13 20:30:52.595505 containerd[1449]: time="2025-01-13T20:30:52.595486719Z" level=info msg="TearDown network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" successfully" Jan 13 20:30:52.595505 containerd[1449]: time="2025-01-13T20:30:52.595496481Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" returns successfully" Jan 13 20:30:52.601946 containerd[1449]: time="2025-01-13T20:30:52.601872118Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\"" Jan 13 20:30:52.602106 containerd[1449]: time="2025-01-13T20:30:52.601981656Z" level=info msg="TearDown network for sandbox 
\"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully" Jan 13 20:30:52.602106 containerd[1449]: time="2025-01-13T20:30:52.602089633Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully" Jan 13 20:30:52.603570 kubelet[2618]: E0113 20:30:52.602817 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:52.608413 containerd[1449]: time="2025-01-13T20:30:52.607092607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:5,}" Jan 13 20:30:52.609358 kubelet[2618]: I0113 20:30:52.609331 2618 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc" Jan 13 20:30:52.610123 containerd[1449]: time="2025-01-13T20:30:52.610095256Z" level=info msg="StopPodSandbox for \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\"" Jan 13 20:30:52.612272 containerd[1449]: time="2025-01-13T20:30:52.612240845Z" level=info msg="Ensure that sandbox b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc in task-service has been cleanup successfully" Jan 13 20:30:52.613261 containerd[1449]: time="2025-01-13T20:30:52.613230806Z" level=info msg="TearDown network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\" successfully" Jan 13 20:30:52.613446 containerd[1449]: time="2025-01-13T20:30:52.613316340Z" level=info msg="StopPodSandbox for \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\" returns successfully" Jan 13 20:30:52.614353 containerd[1449]: time="2025-01-13T20:30:52.614266334Z" level=info msg="StopPodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\"" Jan 13 20:30:52.614430 containerd[1449]: time="2025-01-13T20:30:52.614371351Z" level=info msg="TearDown network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" successfully" Jan 13 20:30:52.614430 containerd[1449]: time="2025-01-13T20:30:52.614383753Z" level=info msg="StopPodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" returns successfully" Jan 13 20:30:52.614953 containerd[1449]: time="2025-01-13T20:30:52.614923721Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\"" Jan 13 20:30:52.615113 containerd[1449]: time="2025-01-13T20:30:52.615094149Z" level=info msg="TearDown network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" successfully" Jan 13 20:30:52.615298 containerd[1449]: time="2025-01-13T20:30:52.615217009Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" returns successfully" Jan 13 20:30:52.615594 containerd[1449]: time="2025-01-13T20:30:52.615551863Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\"" Jan 13 20:30:52.616010 containerd[1449]: time="2025-01-13T20:30:52.615899920Z" level=info msg="TearDown network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" successfully" Jan 13 20:30:52.616010 containerd[1449]: time="2025-01-13T20:30:52.615925524Z" level=info msg="StopPodSandbox for 
\"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" returns successfully" Jan 13 20:30:52.616232 containerd[1449]: time="2025-01-13T20:30:52.616204850Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\"" Jan 13 20:30:52.616491 containerd[1449]: time="2025-01-13T20:30:52.616374197Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully" Jan 13 20:30:52.616491 containerd[1449]: time="2025-01-13T20:30:52.616388919Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully" Jan 13 20:30:52.616932 containerd[1449]: time="2025-01-13T20:30:52.616904603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:30:52.632691 kubelet[2618]: I0113 20:30:52.632645 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2d9jq" podStartSLOduration=1.914346848 podStartE2EDuration="15.632601197s" podCreationTimestamp="2025-01-13 20:30:37 +0000 UTC" firstStartedPulling="2025-01-13 20:30:38.420813683 +0000 UTC m=+20.229720149" lastFinishedPulling="2025-01-13 20:30:52.139068032 +0000 UTC m=+33.947974498" observedRunningTime="2025-01-13 20:30:52.600801424 +0000 UTC m=+34.409707970" watchObservedRunningTime="2025-01-13 20:30:52.632601197 +0000 UTC m=+34.441507663" Jan 13 20:30:53.207422 systemd-networkd[1389]: cali821bf8f17fb: Link UP Jan 13 20:30:53.207732 systemd-networkd[1389]: cali821bf8f17fb: Gained carrier Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:52.796 [INFO][4664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:52.839 [INFO][4664] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jl4cq-eth0 csi-node-driver- calico-system 727a9f8b-291c-4cff-81c1-972e6591d923 659 0 2025-01-13 20:30:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jl4cq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali821bf8f17fb [] []}} ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:52.839 [INFO][4664] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.130 [INFO][4737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" HandleID="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Workload="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.227162 containerd[1449]: 
2025-01-13 20:30:53.159 [INFO][4737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" HandleID="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Workload="localhost-k8s-csi--node--driver--jl4cq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e61b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jl4cq", "timestamp":"2025-01-13 20:30:53.130175052 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.159 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.159 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.159 [INFO][4737] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.161 [INFO][4737] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.169 [INFO][4737] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.173 [INFO][4737] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.175 [INFO][4737] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.177 [INFO][4737] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.177 [INFO][4737] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.179 [INFO][4737] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.187 [INFO][4737] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.192 [INFO][4737] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.192 [INFO][4737] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" host="localhost" Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.192 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
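
[Editor's note] The ipam/ipam_plugin entries above trace Calico's block-affinity allocation end to end: acquire the host-wide IPAM lock, look up the host's affine blocks, try the affinity for 192.168.88.128/26, load and confirm the block, claim one address under a new handle, write the block back ("Writing block in order to claim IPs"), and release the lock. A schematic Go model of just the in-block assignment step (Block and assignFromBlock are invented names; the real allocator also manages the distributed lock and handle objects shown in the log):

```go
package main

import "fmt"

// Block is a minimal model of a Calico IPAM block: a /26 affine to one
// host, with an allocation map keyed by ordinal within the block.
type Block struct {
	CIDR      string         // e.g. "192.168.88.128/26"
	Affinity  string         // host the block is affine to
	Allocated map[int]string // ordinal -> handle ID
}

// assignFromBlock mirrors the logged sequence: confirm the affinity, pick
// the first free ordinal, and record the handle that claims it.
func assignFromBlock(b *Block, host, handle string) (int, error) {
	if b.Affinity != host {
		return 0, fmt.Errorf("block %s not affine to %s", b.CIDR, host)
	}
	for ord := 0; ord < 64; ord++ { // a /26 holds 64 ordinals
		if _, used := b.Allocated[ord]; !used {
			b.Allocated[ord] = handle
			return ord, nil
		}
	}
	return 0, fmt.Errorf("block %s exhausted", b.CIDR)
}

func main() {
	b := &Block{
		CIDR:     "192.168.88.128/26",
		Affinity: "localhost",
		// Assume ordinal 0 (.128) is already taken, since the first
		// address claimed in the log is .129.
		Allocated: map[int]string{0: "reserved"},
	}
	ord, err := assignFromBlock(b, "localhost",
		"k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed")
	fmt.Println(ord, err) // ordinal 1 -> 192.168.88.129, matching the log
}
```
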
Jan 13 20:30:53.227162 containerd[1449]: 2025-01-13 20:30:53.192 [INFO][4737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" HandleID="k8s-pod-network.b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Workload="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.228158 containerd[1449]: 2025-01-13 20:30:53.195 [INFO][4664] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jl4cq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"727a9f8b-291c-4cff-81c1-972e6591d923", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jl4cq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali821bf8f17fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.228158 containerd[1449]: 2025-01-13 20:30:53.195 [INFO][4664] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.228158 containerd[1449]: 2025-01-13 20:30:53.195 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali821bf8f17fb ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.228158 containerd[1449]: 2025-01-13 20:30:53.208 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.228158 containerd[1449]: 2025-01-13 20:30:53.208 [INFO][4664] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jl4cq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"727a9f8b-291c-4cff-81c1-972e6591d923", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed", Pod:"csi-node-driver-jl4cq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali821bf8f17fb", MAC:"4a:e9:26:95:dd:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.228158 containerd[1449]: 2025-01-13 20:30:53.224 [INFO][4664] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed" Namespace="calico-system" Pod="csi-node-driver-jl4cq" WorkloadEndpoint="localhost-k8s-csi--node--driver--jl4cq-eth0" Jan 13 20:30:53.234849 systemd-networkd[1389]: cali130ea9803d2: Link UP Jan 13 20:30:53.236119 systemd-networkd[1389]: cali130ea9803d2: Gained carrier Jan 13 20:30:53.252889 systemd[1]: run-netns-cni\x2d9bd27a2e\x2da183\x2d871f\x2d7754\x2d49d2694daf46.mount: Deactivated successfully. Jan 13 20:30:53.252982 systemd[1]: run-netns-cni\x2dba09e431\x2df7e1\x2d0f4d\x2d023e\x2d98af7f22d6a0.mount: Deactivated successfully. 
Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:52.622 [INFO][4611] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:52.733 [INFO][4611] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0 calico-apiserver-d98fcfdcc- calico-apiserver 5e2bb27f-e9a8-4574-9125-ac3ff1f5546b 784 0 2025-01-13 20:30:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d98fcfdcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d98fcfdcc-j9cwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali130ea9803d2 [] []}} ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:52.733 [INFO][4611] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.132 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" HandleID="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Workload="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.160 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" HandleID="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Workload="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d98fcfdcc-j9cwm", "timestamp":"2025-01-13 20:30:53.132953613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.160 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.192 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.192 [INFO][4670] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.196 [INFO][4670] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.201 [INFO][4670] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.205 [INFO][4670] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.207 [INFO][4670] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.212 [INFO][4670] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.212 [INFO][4670] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.214 [INFO][4670] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25 Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.218 [INFO][4670] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.227 [INFO][4670] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.227 [INFO][4670] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" host="localhost" Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.227 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:30:53.267911 containerd[1449]: 2025-01-13 20:30:53.227 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" HandleID="k8s-pod-network.ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Workload="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.268497 containerd[1449]: 2025-01-13 20:30:53.231 [INFO][4611] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0", GenerateName:"calico-apiserver-d98fcfdcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e2bb27f-e9a8-4574-9125-ac3ff1f5546b", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d98fcfdcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d98fcfdcc-j9cwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali130ea9803d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.268497 containerd[1449]: 2025-01-13 20:30:53.232 [INFO][4611] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.268497 containerd[1449]: 2025-01-13 20:30:53.232 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali130ea9803d2 ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.268497 containerd[1449]: 2025-01-13 20:30:53.236 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.268497 containerd[1449]: 2025-01-13 20:30:53.238 [INFO][4611] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" 
Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0", GenerateName:"calico-apiserver-d98fcfdcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e2bb27f-e9a8-4574-9125-ac3ff1f5546b", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d98fcfdcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25", Pod:"calico-apiserver-d98fcfdcc-j9cwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali130ea9803d2", MAC:"e2:25:db:d9:08:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.268497 containerd[1449]: 2025-01-13 20:30:53.252 [INFO][4611] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-j9cwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--j9cwm-eth0" Jan 13 20:30:53.275469 systemd-networkd[1389]: calic0051919091: Link UP Jan 13 20:30:53.275892 systemd-networkd[1389]: calic0051919091: Gained carrier Jan 13 20:30:53.285144 containerd[1449]: time="2025-01-13T20:30:53.284364256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:53.285144 containerd[1449]: time="2025-01-13T20:30:53.284430506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:53.285144 containerd[1449]: time="2025-01-13T20:30:53.284449909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.285144 containerd[1449]: time="2025-01-13T20:30:53.284564167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:52.790 [INFO][4686] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:52.818 [INFO][4686] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0 calico-kube-controllers-75ff5498fd- calico-system 1881e196-e398-402d-91c4-c538f30e9a68 785 0 2025-01-13 20:30:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75ff5498fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75ff5498fd-l6pnm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic0051919091 [] []}} ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:52.819 [INFO][4686] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.133 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" HandleID="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Workload="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.160 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" HandleID="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Workload="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001324f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75ff5498fd-l6pnm", "timestamp":"2025-01-13 20:30:53.13363304 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.160 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.227 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.227 [INFO][4738] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.230 [INFO][4738] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.236 [INFO][4738] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.242 [INFO][4738] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.248 [INFO][4738] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.257 [INFO][4738] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.257 [INFO][4738] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.258 [INFO][4738] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6 Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.262 [INFO][4738] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.268 [INFO][4738] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.268 [INFO][4738] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" host="localhost" Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.269 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:30:53.298452 containerd[1449]: 2025-01-13 20:30:53.269 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" HandleID="k8s-pod-network.caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Workload="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.299028 containerd[1449]: 2025-01-13 20:30:53.273 [INFO][4686] cni-plugin/k8s.go 386: Populated endpoint ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0", GenerateName:"calico-kube-controllers-75ff5498fd-", Namespace:"calico-system", SelfLink:"", UID:"1881e196-e398-402d-91c4-c538f30e9a68", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75ff5498fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75ff5498fd-l6pnm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0051919091", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.299028 containerd[1449]: 2025-01-13 20:30:53.273 [INFO][4686] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.299028 containerd[1449]: 2025-01-13 20:30:53.273 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0051919091 ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.299028 containerd[1449]: 2025-01-13 20:30:53.276 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.299028 containerd[1449]: 2025-01-13 20:30:53.277 [INFO][4686] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0", GenerateName:"calico-kube-controllers-75ff5498fd-", Namespace:"calico-system", SelfLink:"", UID:"1881e196-e398-402d-91c4-c538f30e9a68", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75ff5498fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6", Pod:"calico-kube-controllers-75ff5498fd-l6pnm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0051919091", MAC:"a6:9f:4c:e3:84:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.299028 containerd[1449]: 2025-01-13 20:30:53.293 [INFO][4686] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6" Namespace="calico-system" Pod="calico-kube-controllers-75ff5498fd-l6pnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ff5498fd--l6pnm-eth0" Jan 13 20:30:53.311364 containerd[1449]: time="2025-01-13T20:30:53.309060130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:53.311364 containerd[1449]: time="2025-01-13T20:30:53.309124541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:53.311364 containerd[1449]: time="2025-01-13T20:30:53.309139703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.311364 containerd[1449]: time="2025-01-13T20:30:53.309206154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.310840 systemd[1]: run-containerd-runc-k8s.io-b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed-runc.miT3vA.mount: Deactivated successfully. Jan 13 20:30:53.327759 systemd[1]: Started cri-containerd-b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed.scope - libcontainer container b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed. 
Jan 13 20:30:53.328478 containerd[1449]: time="2025-01-13T20:30:53.328077665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:53.328478 containerd[1449]: time="2025-01-13T20:30:53.328151077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:53.328478 containerd[1449]: time="2025-01-13T20:30:53.328162839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.328478 containerd[1449]: time="2025-01-13T20:30:53.328255453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.334442 systemd[1]: Started cri-containerd-ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25.scope - libcontainer container ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25. Jan 13 20:30:53.346494 systemd-networkd[1389]: cali0a97dcf2039: Link UP Jan 13 20:30:53.350189 systemd-networkd[1389]: cali0a97dcf2039: Gained carrier Jan 13 20:30:53.356640 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:30:53.358968 systemd[1]: Started cri-containerd-caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6.scope - libcontainer container caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6. Jan 13 20:30:53.369044 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:30:53.377091 containerd[1449]: time="2025-01-13T20:30:53.376855798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl4cq,Uid:727a9f8b-291c-4cff-81c1-972e6591d923,Namespace:calico-system,Attempt:5,} returns sandbox id \"b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed\"" Jan 13 20:30:53.378963 containerd[1449]: time="2025-01-13T20:30:53.378939328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:30:53.386535 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:30:53.394703 containerd[1449]: time="2025-01-13T20:30:53.394669462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-j9cwm,Uid:5e2bb27f-e9a8-4574-9125-ac3ff1f5546b,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25\"" Jan 13 20:30:53.407250 containerd[1449]: time="2025-01-13T20:30:53.407204929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ff5498fd-l6pnm,Uid:1881e196-e398-402d-91c4-c538f30e9a68,Namespace:calico-system,Attempt:5,} returns sandbox id \"caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6\"" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:52.686 [INFO][4629] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:52.732 [INFO][4629] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--rlqw7-eth0 coredns-76f75df574- kube-system bfa3473d-43a3-447d-b0a7-c066cdd14301 779 0 2025-01-13 20:30:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-rlqw7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0a97dcf2039 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:52.733 [INFO][4629] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.130 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" HandleID="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Workload="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.166 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" HandleID="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Workload="localhost-k8s-coredns--76f75df574--rlqw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400053dd10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-rlqw7", "timestamp":"2025-01-13 20:30:53.130386606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.166 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.269 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.269 [INFO][4671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.277 [INFO][4671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.293 [INFO][4671] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.303 [INFO][4671] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.305 [INFO][4671] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.311 [INFO][4671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.311 [INFO][4671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.315 [INFO][4671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423 Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.321 [INFO][4671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.331 [INFO][4671] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.332 [INFO][4671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" host="localhost" Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.332 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:30:53.408997 containerd[1449]: 2025-01-13 20:30:53.332 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" HandleID="k8s-pod-network.330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Workload="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.410870 containerd[1449]: 2025-01-13 20:30:53.340 [INFO][4629] cni-plugin/k8s.go 386: Populated endpoint ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rlqw7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bfa3473d-43a3-447d-b0a7-c066cdd14301", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-rlqw7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a97dcf2039", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.410870 containerd[1449]: 2025-01-13 20:30:53.340 [INFO][4629] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.410870 containerd[1449]: 2025-01-13 20:30:53.340 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a97dcf2039 ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.410870 containerd[1449]: 2025-01-13 20:30:53.353 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.410870 containerd[1449]: 2025-01-13 20:30:53.354 
[INFO][4629] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rlqw7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bfa3473d-43a3-447d-b0a7-c066cdd14301", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423", Pod:"coredns-76f75df574-rlqw7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a97dcf2039", MAC:"26:c9:e5:91:a0:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.410870 containerd[1449]: 2025-01-13 20:30:53.404 [INFO][4629] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423" Namespace="kube-system" Pod="coredns-76f75df574-rlqw7" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rlqw7-eth0" Jan 13 20:30:53.425122 systemd-networkd[1389]: cali0275ce7fcbf: Link UP Jan 13 20:30:53.425767 systemd-networkd[1389]: cali0275ce7fcbf: Gained carrier Jan 13 20:30:53.432312 containerd[1449]: time="2025-01-13T20:30:53.431675128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:53.432312 containerd[1449]: time="2025-01-13T20:30:53.431741819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:53.432312 containerd[1449]: time="2025-01-13T20:30:53.431758542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.432312 containerd[1449]: time="2025-01-13T20:30:53.431840475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:52.829 [INFO][4682] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:52.844 [INFO][4682] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0 calico-apiserver-d98fcfdcc- calico-apiserver 32e2fd28-1c60-4d81-883d-85b833d714fc 782 0 2025-01-13 20:30:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d98fcfdcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d98fcfdcc-xd4cg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0275ce7fcbf [] []}} ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:52.844 [INFO][4682] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.132 [INFO][4732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" HandleID="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Workload="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.169 [INFO][4732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" HandleID="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Workload="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003469a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d98fcfdcc-xd4cg", "timestamp":"2025-01-13 20:30:53.1324927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.169 [INFO][4732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.332 [INFO][4732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.332 [INFO][4732] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.336 [INFO][4732] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.345 [INFO][4732] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.355 [INFO][4732] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.358 [INFO][4732] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.360 [INFO][4732] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.360 [INFO][4732] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.406 [INFO][4732] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344 Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.412 [INFO][4732] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.419 [INFO][4732] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.419 [INFO][4732] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" host="localhost" Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.419 [INFO][4732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:30:53.443745 containerd[1449]: 2025-01-13 20:30:53.419 [INFO][4732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" HandleID="k8s-pod-network.a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Workload="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.444511 containerd[1449]: 2025-01-13 20:30:53.423 [INFO][4682] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0", GenerateName:"calico-apiserver-d98fcfdcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"32e2fd28-1c60-4d81-883d-85b833d714fc", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d98fcfdcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d98fcfdcc-xd4cg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0275ce7fcbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.444511 containerd[1449]: 2025-01-13 20:30:53.423 [INFO][4682] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.444511 containerd[1449]: 2025-01-13 20:30:53.423 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0275ce7fcbf ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.444511 containerd[1449]: 2025-01-13 20:30:53.425 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.444511 containerd[1449]: 2025-01-13 20:30:53.426 [INFO][4682] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" 
Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0", GenerateName:"calico-apiserver-d98fcfdcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"32e2fd28-1c60-4d81-883d-85b833d714fc", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d98fcfdcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344", Pod:"calico-apiserver-d98fcfdcc-xd4cg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0275ce7fcbf", MAC:"92:99:5e:37:ee:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.444511 containerd[1449]: 2025-01-13 20:30:53.438 [INFO][4682] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344" Namespace="calico-apiserver" Pod="calico-apiserver-d98fcfdcc-xd4cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d98fcfdcc--xd4cg-eth0" Jan 13 20:30:53.459733 systemd[1]: Started cri-containerd-330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423.scope - libcontainer container 330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423. Jan 13 20:30:53.471200 containerd[1449]: time="2025-01-13T20:30:53.469987882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:53.471200 containerd[1449]: time="2025-01-13T20:30:53.470054533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:53.471200 containerd[1449]: time="2025-01-13T20:30:53.470070255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.471200 containerd[1449]: time="2025-01-13T20:30:53.470161710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.475186 systemd-networkd[1389]: calic4c30332dd6: Link UP Jan 13 20:30:53.475450 systemd-networkd[1389]: calic4c30332dd6: Gained carrier Jan 13 20:30:53.479080 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:52.828 [INFO][4704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:52.870 [INFO][4704] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--kr682-eth0 coredns-76f75df574- kube-system 9a4b274c-0db7-4f24-b51e-a8ee914d4260 783 0 2025-01-13 20:30:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-kr682 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4c30332dd6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:52.870 [INFO][4704] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.130 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" HandleID="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Workload="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.170 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" HandleID="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Workload="localhost-k8s-coredns--76f75df574--kr682-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003767d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-kr682", "timestamp":"2025-01-13 20:30:53.130750583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.170 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.419 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.420 [INFO][4747] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.423 [INFO][4747] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.430 [INFO][4747] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.437 [INFO][4747] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.439 [INFO][4747] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.442 [INFO][4747] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.442 [INFO][4747] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.444 [INFO][4747] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410 Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.452 [INFO][4747] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.460 [INFO][4747] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.460 [INFO][4747] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" host="localhost" Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.460 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:30:53.496074 containerd[1449]: 2025-01-13 20:30:53.460 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" HandleID="k8s-pod-network.f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Workload="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.496695 containerd[1449]: 2025-01-13 20:30:53.465 [INFO][4704] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kr682-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a4b274c-0db7-4f24-b51e-a8ee914d4260", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-kr682", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4c30332dd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.496695 containerd[1449]: 2025-01-13 20:30:53.465 [INFO][4704] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.496695 containerd[1449]: 2025-01-13 20:30:53.465 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4c30332dd6 ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.496695 containerd[1449]: 2025-01-13 20:30:53.475 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.496695 containerd[1449]: 2025-01-13 20:30:53.475 
[INFO][4704] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--kr682-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a4b274c-0db7-4f24-b51e-a8ee914d4260", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410", Pod:"coredns-76f75df574-kr682", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4c30332dd6", MAC:"6e:61:9a:39:f4:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:30:53.496695 containerd[1449]: 2025-01-13 20:30:53.491 [INFO][4704] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410" Namespace="kube-system" Pod="coredns-76f75df574-kr682" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--kr682-eth0" Jan 13 20:30:53.505224 systemd[1]: Started cri-containerd-a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344.scope - libcontainer container a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344. 
Jan 13 20:30:53.506488 containerd[1449]: time="2025-01-13T20:30:53.506440861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rlqw7,Uid:bfa3473d-43a3-447d-b0a7-c066cdd14301,Namespace:kube-system,Attempt:5,} returns sandbox id \"330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423\"" Jan 13 20:30:53.507445 kubelet[2618]: E0113 20:30:53.507413 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:53.512124 containerd[1449]: time="2025-01-13T20:30:53.512078435Z" level=info msg="CreateContainer within sandbox \"330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:30:53.526385 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:30:53.541972 containerd[1449]: time="2025-01-13T20:30:53.523788611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:30:53.542569 containerd[1449]: time="2025-01-13T20:30:53.542317188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:30:53.542569 containerd[1449]: time="2025-01-13T20:30:53.542382679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.542569 containerd[1449]: time="2025-01-13T20:30:53.542496777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:30:53.545689 containerd[1449]: time="2025-01-13T20:30:53.545636234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d98fcfdcc-xd4cg,Uid:32e2fd28-1c60-4d81-883d-85b833d714fc,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344\"" Jan 13 20:30:53.551850 containerd[1449]: time="2025-01-13T20:30:53.551800652Z" level=info msg="CreateContainer within sandbox \"330930d46ab746015349f34a07bca900956149f4a9cc319b208983df8c0bb423\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e2641099ef5995583ed87b35ab438cb7c79e584260bf782162b32047ad420cc\"" Jan 13 20:30:53.552510 containerd[1449]: time="2025-01-13T20:30:53.552462076Z" level=info msg="StartContainer for \"3e2641099ef5995583ed87b35ab438cb7c79e584260bf782162b32047ad420cc\"" Jan 13 20:30:53.565720 systemd[1]: Started cri-containerd-f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410.scope - libcontainer container f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410. Jan 13 20:30:53.589744 systemd[1]: Started cri-containerd-3e2641099ef5995583ed87b35ab438cb7c79e584260bf782162b32047ad420cc.scope - libcontainer container 3e2641099ef5995583ed87b35ab438cb7c79e584260bf782162b32047ad420cc. 
Jan 13 20:30:53.593005 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:30:53.611981 containerd[1449]: time="2025-01-13T20:30:53.611899779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kr682,Uid:9a4b274c-0db7-4f24-b51e-a8ee914d4260,Namespace:kube-system,Attempt:5,} returns sandbox id \"f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410\"" Jan 13 20:30:53.613643 kubelet[2618]: E0113 20:30:53.613617 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:53.620662 containerd[1449]: time="2025-01-13T20:30:53.620454975Z" level=info msg="CreateContainer within sandbox \"f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:30:53.658528 containerd[1449]: time="2025-01-13T20:30:53.658477923Z" level=info msg="StartContainer for \"3e2641099ef5995583ed87b35ab438cb7c79e584260bf782162b32047ad420cc\" returns successfully" Jan 13 20:30:53.666653 kubelet[2618]: E0113 20:30:53.665703 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:53.668458 kubelet[2618]: E0113 20:30:53.668331 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:53.681529 containerd[1449]: time="2025-01-13T20:30:53.679696847Z" level=info msg="CreateContainer within sandbox \"f004e0a21ba6eb53df4e28c6e3bf16624f1fcfbebd6f0121c89958595306f410\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"712049e70f7200143041943833b65e5722b3d47188c7549b626d6b46f6e9343d\"" Jan 13 20:30:53.683400 kubelet[2618]: I0113 20:30:53.682956 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rlqw7" podStartSLOduration=22.682915397 podStartE2EDuration="22.682915397s" podCreationTimestamp="2025-01-13 20:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:53.682757532 +0000 UTC m=+35.491663998" watchObservedRunningTime="2025-01-13 20:30:53.682915397 +0000 UTC m=+35.491821863" Jan 13 20:30:53.684745 containerd[1449]: time="2025-01-13T20:30:53.684316539Z" level=info msg="StartContainer for \"712049e70f7200143041943833b65e5722b3d47188c7549b626d6b46f6e9343d\"" Jan 13 20:30:53.726156 systemd[1]: Started cri-containerd-712049e70f7200143041943833b65e5722b3d47188c7549b626d6b46f6e9343d.scope - libcontainer container 712049e70f7200143041943833b65e5722b3d47188c7549b626d6b46f6e9343d. 
Jan 13 20:30:53.753775 containerd[1449]: time="2025-01-13T20:30:53.753733624Z" level=info msg="StartContainer for \"712049e70f7200143041943833b65e5722b3d47188c7549b626d6b46f6e9343d\" returns successfully" Jan 13 20:30:54.306570 kernel: bpftool[5320]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:30:54.456783 systemd-networkd[1389]: cali0275ce7fcbf: Gained IPv6LL Jan 13 20:30:54.468585 systemd-networkd[1389]: vxlan.calico: Link UP Jan 13 20:30:54.468591 systemd-networkd[1389]: vxlan.calico: Gained carrier Jan 13 20:30:54.520795 systemd-networkd[1389]: cali130ea9803d2: Gained IPv6LL Jan 13 20:30:54.585664 systemd-networkd[1389]: calic0051919091: Gained IPv6LL Jan 13 20:30:54.591460 containerd[1449]: time="2025-01-13T20:30:54.591036909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:54.592433 containerd[1449]: time="2025-01-13T20:30:54.592386478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:30:54.593674 containerd[1449]: time="2025-01-13T20:30:54.593388073Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:54.595592 containerd[1449]: time="2025-01-13T20:30:54.595536965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:54.596245 containerd[1449]: time="2025-01-13T20:30:54.596219190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.217249297s" Jan 13 20:30:54.596309 containerd[1449]: time="2025-01-13T20:30:54.596248195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:30:54.597246 containerd[1449]: time="2025-01-13T20:30:54.597161336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:30:54.598220 containerd[1449]: time="2025-01-13T20:30:54.598095841Z" level=info msg="CreateContainer within sandbox \"b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:30:54.612326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633431664.mount: Deactivated successfully. 
Jan 13 20:30:54.617045 containerd[1449]: time="2025-01-13T20:30:54.617007205Z" level=info msg="CreateContainer within sandbox \"b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1289d09b7174aa4130796fd0fc32467d6f301a1dfc35f296164d56cc9dfafe5a\"" Jan 13 20:30:54.617694 containerd[1449]: time="2025-01-13T20:30:54.617669548Z" level=info msg="StartContainer for \"1289d09b7174aa4130796fd0fc32467d6f301a1dfc35f296164d56cc9dfafe5a\"" Jan 13 20:30:54.677213 systemd[1]: Started cri-containerd-1289d09b7174aa4130796fd0fc32467d6f301a1dfc35f296164d56cc9dfafe5a.scope - libcontainer container 1289d09b7174aa4130796fd0fc32467d6f301a1dfc35f296164d56cc9dfafe5a. Jan 13 20:30:54.690630 kubelet[2618]: E0113 20:30:54.690600 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:54.698561 kubelet[2618]: E0113 20:30:54.696437 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:54.698561 kubelet[2618]: E0113 20:30:54.696951 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:54.710617 kubelet[2618]: I0113 20:30:54.705340 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kr682" podStartSLOduration=23.70518044 podStartE2EDuration="23.70518044s" podCreationTimestamp="2025-01-13 20:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:30:54.702366485 +0000 UTC m=+36.511272951" watchObservedRunningTime="2025-01-13 20:30:54.70518044 +0000 UTC m=+36.514087186" Jan 13 20:30:54.779376 systemd-networkd[1389]: cali821bf8f17fb: Gained IPv6LL Jan 13 20:30:54.785286 containerd[1449]: time="2025-01-13T20:30:54.785246662Z" level=info msg="StartContainer for \"1289d09b7174aa4130796fd0fc32467d6f301a1dfc35f296164d56cc9dfafe5a\" returns successfully" Jan 13 20:30:54.968760 systemd-networkd[1389]: cali0a97dcf2039: Gained IPv6LL Jan 13 20:30:55.288706 systemd-networkd[1389]: calic4c30332dd6: Gained IPv6LL Jan 13 20:30:55.704441 kubelet[2618]: E0113 20:30:55.704253 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:55.704441 kubelet[2618]: E0113 20:30:55.704368 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:55.928808 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Jan 13 20:30:56.521359 containerd[1449]: time="2025-01-13T20:30:56.521311178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:56.522294 containerd[1449]: time="2025-01-13T20:30:56.522160343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 13 20:30:56.523200 containerd[1449]: time="2025-01-13T20:30:56.523004267Z" level=info msg="ImageCreate event 
name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:56.525354 containerd[1449]: time="2025-01-13T20:30:56.525324330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:56.526107 containerd[1449]: time="2025-01-13T20:30:56.526077121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.928884299s" Jan 13 20:30:56.526180 containerd[1449]: time="2025-01-13T20:30:56.526110006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 20:30:56.528350 containerd[1449]: time="2025-01-13T20:30:56.527956078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:30:56.529826 containerd[1449]: time="2025-01-13T20:30:56.529790149Z" level=info msg="CreateContainer within sandbox \"ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:30:56.543521 containerd[1449]: time="2025-01-13T20:30:56.543450445Z" level=info msg="CreateContainer within sandbox \"ccda5107a088d63db2d511e9927cb37c658f4048172a74b90a4d2ea6a01b5d25\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de480b211d67f4c51315bd452716d6ae8adf81425cccd1c5d604896b679e7e03\"" Jan 13 20:30:56.544052 containerd[1449]: time="2025-01-13T20:30:56.544022969Z" level=info msg="StartContainer for \"de480b211d67f4c51315bd452716d6ae8adf81425cccd1c5d604896b679e7e03\"" Jan 13 20:30:56.592768 systemd[1]: Started cri-containerd-de480b211d67f4c51315bd452716d6ae8adf81425cccd1c5d604896b679e7e03.scope - libcontainer container de480b211d67f4c51315bd452716d6ae8adf81425cccd1c5d604896b679e7e03. Jan 13 20:30:56.678699 containerd[1449]: time="2025-01-13T20:30:56.678655159Z" level=info msg="StartContainer for \"de480b211d67f4c51315bd452716d6ae8adf81425cccd1c5d604896b679e7e03\" returns successfully" Jan 13 20:30:56.711418 kubelet[2618]: E0113 20:30:56.710373 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:56.712093 kubelet[2618]: E0113 20:30:56.712013 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:56.907265 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:48972.service - OpenSSH per-connection server daemon (10.0.0.1:48972). Jan 13 20:30:56.961961 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 48972 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:30:56.963528 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:56.967764 systemd-logind[1430]: New session 10 of user core. 
Jan 13 20:30:56.973716 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:30:57.136401 sshd[5521]: Connection closed by 10.0.0.1 port 48972 Jan 13 20:30:57.136803 sshd-session[5519]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:57.150249 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:48972.service: Deactivated successfully. Jan 13 20:30:57.152295 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:30:57.154363 systemd-logind[1430]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:30:57.162011 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:48984.service - OpenSSH per-connection server daemon (10.0.0.1:48984). Jan 13 20:30:57.163830 systemd-logind[1430]: Removed session 10. Jan 13 20:30:57.204327 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 48984 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:30:57.205661 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:57.209586 systemd-logind[1430]: New session 11 of user core. Jan 13 20:30:57.220724 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:30:57.430427 sshd[5537]: Connection closed by 10.0.0.1 port 48984 Jan 13 20:30:57.431671 sshd-session[5535]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:57.438805 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:48984.service: Deactivated successfully. Jan 13 20:30:57.443520 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:30:57.447781 systemd-logind[1430]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:30:57.457874 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:48992.service - OpenSSH per-connection server daemon (10.0.0.1:48992). Jan 13 20:30:57.463319 systemd-logind[1430]: Removed session 11. Jan 13 20:30:57.509950 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 48992 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:30:57.511800 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:57.516964 systemd-logind[1430]: New session 12 of user core. Jan 13 20:30:57.525791 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:30:57.740487 sshd[5550]: Connection closed by 10.0.0.1 port 48992 Jan 13 20:30:57.742036 sshd-session[5548]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:57.747752 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:48992.service: Deactivated successfully. Jan 13 20:30:57.750363 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:30:57.752532 systemd-logind[1430]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:30:57.756459 systemd-logind[1430]: Removed session 12. 
Jan 13 20:30:58.315394 containerd[1449]: time="2025-01-13T20:30:58.315352749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:58.322814 containerd[1449]: time="2025-01-13T20:30:58.319784856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 13 20:30:58.336116 containerd[1449]: time="2025-01-13T20:30:58.333485793Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:58.359705 containerd[1449]: time="2025-01-13T20:30:58.357517030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.829527467s" Jan 13 20:30:58.360007 containerd[1449]: time="2025-01-13T20:30:58.359887085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 13 20:30:58.360007 containerd[1449]: time="2025-01-13T20:30:58.359699899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:58.360591 containerd[1449]: time="2025-01-13T20:30:58.360380795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:30:58.376208 containerd[1449]: time="2025-01-13T20:30:58.376165427Z" level=info msg="CreateContainer within sandbox \"caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:30:58.399156 containerd[1449]: time="2025-01-13T20:30:58.399086467Z" level=info msg="CreateContainer within sandbox \"caf0a85d964e2934dca0ab0c978230c733513ed97ea73d28db485e46ae5dd3d6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8b0a93bb340d1703710cde1ee8dfe37a18ef8b8c2d755a660e63f100ffb04b56\"" Jan 13 20:30:58.399949 containerd[1449]: time="2025-01-13T20:30:58.399917625Z" level=info msg="StartContainer for \"8b0a93bb340d1703710cde1ee8dfe37a18ef8b8c2d755a660e63f100ffb04b56\"" Jan 13 20:30:58.463729 kubelet[2618]: I0113 20:30:58.463180 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d98fcfdcc-j9cwm" podStartSLOduration=18.333100365 podStartE2EDuration="21.463134402s" podCreationTimestamp="2025-01-13 20:30:37 +0000 UTC" firstStartedPulling="2025-01-13 20:30:53.396475868 +0000 UTC m=+35.205382334" lastFinishedPulling="2025-01-13 20:30:56.526509905 +0000 UTC m=+38.335416371" observedRunningTime="2025-01-13 20:30:56.723532742 +0000 UTC m=+38.532439208" watchObservedRunningTime="2025-01-13 20:30:58.463134402 +0000 UTC m=+40.272040868" Jan 13 20:30:58.465456 systemd[1]: Started cri-containerd-8b0a93bb340d1703710cde1ee8dfe37a18ef8b8c2d755a660e63f100ffb04b56.scope - libcontainer container 8b0a93bb340d1703710cde1ee8dfe37a18ef8b8c2d755a660e63f100ffb04b56. 
Jan 13 20:30:58.559475 kubelet[2618]: E0113 20:30:58.559430 2618 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1881e196_e398_402d_91c4_c538f30e9a68.slice/cri-containerd-8b0a93bb340d1703710cde1ee8dfe37a18ef8b8c2d755a660e63f100ffb04b56.scope\": RecentStats: unable to find data in memory cache]" Jan 13 20:30:58.580074 containerd[1449]: time="2025-01-13T20:30:58.579946437Z" level=info msg="StartContainer for \"8b0a93bb340d1703710cde1ee8dfe37a18ef8b8c2d755a660e63f100ffb04b56\" returns successfully" Jan 13 20:30:58.631246 containerd[1449]: time="2025-01-13T20:30:58.631185481Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:58.631847 containerd[1449]: time="2025-01-13T20:30:58.631793207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:30:58.633957 containerd[1449]: time="2025-01-13T20:30:58.633921508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 273.514628ms" Jan 13 20:30:58.634007 containerd[1449]: time="2025-01-13T20:30:58.633958193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 20:30:58.635144 containerd[1449]: time="2025-01-13T20:30:58.635118797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:30:58.635972 containerd[1449]: time="2025-01-13T20:30:58.635945194Z" level=info msg="CreateContainer within sandbox \"a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:30:58.649556 containerd[1449]: time="2025-01-13T20:30:58.649503671Z" level=info msg="CreateContainer within sandbox \"a979cc48a0b130f4f2f0913e53f81c90d2731e2a5556037b06940473d7be0344\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90d1af428ea1e504730b736d6a5d0cae27ba81b4be7b9edb659e1cf4c2ebe154\"" Jan 13 20:30:58.650585 containerd[1449]: time="2025-01-13T20:30:58.650224892Z" level=info msg="StartContainer for \"90d1af428ea1e504730b736d6a5d0cae27ba81b4be7b9edb659e1cf4c2ebe154\"" Jan 13 20:30:58.679191 systemd[1]: Started cri-containerd-90d1af428ea1e504730b736d6a5d0cae27ba81b4be7b9edb659e1cf4c2ebe154.scope - libcontainer container 90d1af428ea1e504730b736d6a5d0cae27ba81b4be7b9edb659e1cf4c2ebe154. 
Jan 13 20:30:58.723152 containerd[1449]: time="2025-01-13T20:30:58.723047628Z" level=info msg="StartContainer for \"90d1af428ea1e504730b736d6a5d0cae27ba81b4be7b9edb659e1cf4c2ebe154\" returns successfully" Jan 13 20:30:58.742461 kubelet[2618]: I0113 20:30:58.741644 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75ff5498fd-l6pnm" podStartSLOduration=15.790188339 podStartE2EDuration="20.74159761s" podCreationTimestamp="2025-01-13 20:30:38 +0000 UTC" firstStartedPulling="2025-01-13 20:30:53.408719769 +0000 UTC m=+35.217626235" lastFinishedPulling="2025-01-13 20:30:58.36012904 +0000 UTC m=+40.169035506" observedRunningTime="2025-01-13 20:30:58.740400881 +0000 UTC m=+40.549307307" watchObservedRunningTime="2025-01-13 20:30:58.74159761 +0000 UTC m=+40.550504076" Jan 13 20:30:58.755610 kubelet[2618]: I0113 20:30:58.755437 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d98fcfdcc-xd4cg" podStartSLOduration=16.668099374 podStartE2EDuration="21.755219656s" podCreationTimestamp="2025-01-13 20:30:37 +0000 UTC" firstStartedPulling="2025-01-13 20:30:53.547172318 +0000 UTC m=+35.356078784" lastFinishedPulling="2025-01-13 20:30:58.6342926 +0000 UTC m=+40.443199066" observedRunningTime="2025-01-13 20:30:58.753727005 +0000 UTC m=+40.562633511" watchObservedRunningTime="2025-01-13 20:30:58.755219656 +0000 UTC m=+40.564126202" Jan 13 20:30:59.736795 kubelet[2618]: I0113 20:30:59.736696 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:30:59.860074 containerd[1449]: time="2025-01-13T20:30:59.860018630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:59.860783 containerd[1449]: time="2025-01-13T20:30:59.860742210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:30:59.863190 containerd[1449]: time="2025-01-13T20:30:59.863152304Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:59.865579 containerd[1449]: time="2025-01-13T20:30:59.865525193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:59.867095 containerd[1449]: time="2025-01-13T20:30:59.867051205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.231900723s" Jan 13 20:30:59.867095 containerd[1449]: time="2025-01-13T20:30:59.867088570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:30:59.871713 containerd[1449]: time="2025-01-13T20:30:59.871680846Z" level=info msg="CreateContainer within sandbox \"b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:30:59.890843 containerd[1449]: time="2025-01-13T20:30:59.890790534Z" level=info msg="CreateContainer within sandbox \"b4a773916ee7fb2b9884ea0cd01f3d6939dd797ded15f246bef51222cb5f4bed\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3f08aab627c33c7dfac2c7c6491380f05a08930de255580af312c6dc7049b363\"" Jan 13 20:30:59.892730 containerd[1449]: time="2025-01-13T20:30:59.892694918Z" level=info msg="StartContainer for \"3f08aab627c33c7dfac2c7c6491380f05a08930de255580af312c6dc7049b363\"" Jan 13 20:30:59.927735 systemd[1]: Started cri-containerd-3f08aab627c33c7dfac2c7c6491380f05a08930de255580af312c6dc7049b363.scope - libcontainer container 3f08aab627c33c7dfac2c7c6491380f05a08930de255580af312c6dc7049b363. Jan 13 20:30:59.966401 containerd[1449]: time="2025-01-13T20:30:59.966355644Z" level=info msg="StartContainer for \"3f08aab627c33c7dfac2c7c6491380f05a08930de255580af312c6dc7049b363\" returns successfully" Jan 13 20:31:00.380121 kubelet[2618]: I0113 20:31:00.380081 2618 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:31:00.380321 kubelet[2618]: I0113 20:31:00.380136 2618 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:31:00.754682 kubelet[2618]: I0113 20:31:00.753856 2618 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jl4cq" podStartSLOduration=16.264914775 podStartE2EDuration="22.75380481s" podCreationTimestamp="2025-01-13 20:30:38 +0000 UTC" firstStartedPulling="2025-01-13 20:30:53.378410764 +0000 UTC m=+35.187317190" lastFinishedPulling="2025-01-13 20:30:59.867300759 +0000 UTC m=+41.676207225" observedRunningTime="2025-01-13 20:31:00.752614088 +0000 UTC m=+42.561520554" watchObservedRunningTime="2025-01-13 20:31:00.75380481 +0000 UTC m=+42.562711236" Jan 13 20:31:02.759413 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:52014.service - OpenSSH per-connection server daemon (10.0.0.1:52014). Jan 13 20:31:02.832646 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 52014 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:02.836715 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:02.840439 systemd-logind[1430]: New session 13 of user core. Jan 13 20:31:02.849774 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:31:03.020782 sshd[5730]: Connection closed by 10.0.0.1 port 52014 Jan 13 20:31:03.021330 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:03.034118 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:52014.service: Deactivated successfully. Jan 13 20:31:03.036603 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:31:03.038214 systemd-logind[1430]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:31:03.039794 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:52030.service - OpenSSH per-connection server daemon (10.0.0.1:52030). Jan 13 20:31:03.040637 systemd-logind[1430]: Removed session 13. 
Jan 13 20:31:03.088726 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 52030 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:03.090054 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:03.095843 systemd-logind[1430]: New session 14 of user core. Jan 13 20:31:03.105755 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:31:03.323033 sshd[5744]: Connection closed by 10.0.0.1 port 52030 Jan 13 20:31:03.323523 sshd-session[5742]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:03.337564 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:52030.service: Deactivated successfully. Jan 13 20:31:03.340559 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:31:03.343146 systemd-logind[1430]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:31:03.357654 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:52034.service - OpenSSH per-connection server daemon (10.0.0.1:52034). Jan 13 20:31:03.358994 systemd-logind[1430]: Removed session 14. Jan 13 20:31:03.416493 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 52034 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:03.417896 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:03.422225 systemd-logind[1430]: New session 15 of user core. Jan 13 20:31:03.436498 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:31:05.059010 sshd[5756]: Connection closed by 10.0.0.1 port 52034 Jan 13 20:31:05.059788 sshd-session[5754]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:05.068976 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:52034.service: Deactivated successfully. Jan 13 20:31:05.072149 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:31:05.074801 systemd-logind[1430]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:31:05.085720 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:52042.service - OpenSSH per-connection server daemon (10.0.0.1:52042). Jan 13 20:31:05.086904 systemd-logind[1430]: Removed session 15. Jan 13 20:31:05.132771 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 52042 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:05.134036 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:05.138170 systemd-logind[1430]: New session 16 of user core. Jan 13 20:31:05.148707 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:31:05.494204 sshd[5787]: Connection closed by 10.0.0.1 port 52042 Jan 13 20:31:05.494687 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:05.504222 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:52042.service: Deactivated successfully. Jan 13 20:31:05.507683 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:31:05.510826 systemd-logind[1430]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:31:05.516892 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:52048.service - OpenSSH per-connection server daemon (10.0.0.1:52048). Jan 13 20:31:05.518146 systemd-logind[1430]: Removed session 16. 
Jan 13 20:31:05.578439 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:05.579844 sshd-session[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:05.584509 systemd-logind[1430]: New session 17 of user core. Jan 13 20:31:05.595723 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:31:05.684839 kubelet[2618]: I0113 20:31:05.684800 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:05.746065 sshd[5799]: Connection closed by 10.0.0.1 port 52048 Jan 13 20:31:05.746863 sshd-session[5797]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:05.751410 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:52048.service: Deactivated successfully. Jan 13 20:31:05.757588 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:31:05.758645 systemd-logind[1430]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:31:05.759475 systemd-logind[1430]: Removed session 17. Jan 13 20:31:10.759858 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:52050.service - OpenSSH per-connection server daemon (10.0.0.1:52050). Jan 13 20:31:10.810002 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 52050 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:10.811142 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:10.815147 systemd-logind[1430]: New session 18 of user core. Jan 13 20:31:10.824763 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:31:10.990192 sshd[5819]: Connection closed by 10.0.0.1 port 52050 Jan 13 20:31:10.990607 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:10.994927 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:52050.service: Deactivated successfully. Jan 13 20:31:10.998401 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:31:10.999957 systemd-logind[1430]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:31:11.001335 systemd-logind[1430]: Removed session 18. Jan 13 20:31:16.002678 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:36348.service - OpenSSH per-connection server daemon (10.0.0.1:36348). Jan 13 20:31:16.065296 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 36348 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo Jan 13 20:31:16.066782 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:16.072430 systemd-logind[1430]: New session 19 of user core. Jan 13 20:31:16.083755 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:31:16.257636 sshd[5843]: Connection closed by 10.0.0.1 port 36348 Jan 13 20:31:16.258285 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:16.261575 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:36348.service: Deactivated successfully. Jan 13 20:31:16.263472 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:31:16.265738 systemd-logind[1430]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:31:16.266820 systemd-logind[1430]: Removed session 19. 
Jan 13 20:31:16.423083 kubelet[2618]: E0113 20:31:16.423036 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:18.271951 containerd[1449]: time="2025-01-13T20:31:18.271912084Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:31:18.272643 containerd[1449]: time="2025-01-13T20:31:18.272562019Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:31:18.272643 containerd[1449]: time="2025-01-13T20:31:18.272581621Z" level=info msg="StopPodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:31:18.281314 containerd[1449]: time="2025-01-13T20:31:18.281257563Z" level=info msg="RemovePodSandbox for \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:31:18.281314 containerd[1449]: time="2025-01-13T20:31:18.281318568Z" level=info msg="Forcibly stopping sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\"" Jan 13 20:31:18.281431 containerd[1449]: time="2025-01-13T20:31:18.281399055Z" level=info msg="TearDown network for sandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" successfully" Jan 13 20:31:18.300932 containerd[1449]: time="2025-01-13T20:31:18.300873961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:31:18.301121 containerd[1449]: time="2025-01-13T20:31:18.300951008Z" level=info msg="RemovePodSandbox \"43643c7e5ccf2f6123a60334abfa99ecb9c3c9ae6377489073238964a83b68d7\" returns successfully" Jan 13 20:31:18.301754 containerd[1449]: time="2025-01-13T20:31:18.301728835Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:31:18.301841 containerd[1449]: time="2025-01-13T20:31:18.301826283Z" level=info msg="TearDown network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" successfully" Jan 13 20:31:18.301878 containerd[1449]: time="2025-01-13T20:31:18.301840444Z" level=info msg="StopPodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" returns successfully" Jan 13 20:31:18.303032 containerd[1449]: time="2025-01-13T20:31:18.302338527Z" level=info msg="RemovePodSandbox for \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:31:18.303032 containerd[1449]: time="2025-01-13T20:31:18.302363449Z" level=info msg="Forcibly stopping sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\"" Jan 13 20:31:18.303032 containerd[1449]: time="2025-01-13T20:31:18.302431215Z" level=info msg="TearDown network for sandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" successfully" Jan 13 20:31:18.306616 containerd[1449]: time="2025-01-13T20:31:18.306580610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:31:18.306768 containerd[1449]: time="2025-01-13T20:31:18.306749984Z" level=info msg="RemovePodSandbox \"3ed141ee74fd8466fc2858bf0a83b765adcda31e8a45969b6575916e813952b8\" returns successfully" Jan 13 20:31:18.307148 containerd[1449]: time="2025-01-13T20:31:18.307117056Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" Jan 13 20:31:18.307214 containerd[1449]: time="2025-01-13T20:31:18.307205903Z" level=info msg="TearDown network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" successfully" Jan 13 20:31:18.307240 containerd[1449]: time="2025-01-13T20:31:18.307217184Z" level=info msg="StopPodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" returns successfully" Jan 13 20:31:18.307729 containerd[1449]: time="2025-01-13T20:31:18.307705146Z" level=info msg="RemovePodSandbox for \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" Jan 13 20:31:18.307819 containerd[1449]: time="2025-01-13T20:31:18.307805914Z" level=info msg="Forcibly stopping sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\"" Jan 13 20:31:18.307937 containerd[1449]: time="2025-01-13T20:31:18.307921204Z" level=info msg="TearDown network for sandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" successfully" Jan 13 20:31:18.310441 containerd[1449]: time="2025-01-13T20:31:18.310400656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:31:18.310613 containerd[1449]: time="2025-01-13T20:31:18.310593433Z" level=info msg="RemovePodSandbox \"02833b59f867cfdf5eba3e30bff932e672dc2fe1c1880d847ccdba9c056d9423\" returns successfully" Jan 13 20:31:18.311065 containerd[1449]: time="2025-01-13T20:31:18.311034111Z" level=info msg="StopPodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\"" Jan 13 20:31:18.311199 containerd[1449]: time="2025-01-13T20:31:18.311146520Z" level=info msg="TearDown network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" successfully" Jan 13 20:31:18.311199 containerd[1449]: time="2025-01-13T20:31:18.311161642Z" level=info msg="StopPodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" returns successfully" Jan 13 20:31:18.311493 containerd[1449]: time="2025-01-13T20:31:18.311412423Z" level=info msg="RemovePodSandbox for \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\"" Jan 13 20:31:18.311493 containerd[1449]: time="2025-01-13T20:31:18.311476188Z" level=info msg="Forcibly stopping sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\"" Jan 13 20:31:18.311755 containerd[1449]: time="2025-01-13T20:31:18.311734051Z" level=info msg="TearDown network for sandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" successfully" Jan 13 20:31:18.315109 containerd[1449]: time="2025-01-13T20:31:18.315073136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:31:18.315351 containerd[1449]: time="2025-01-13T20:31:18.315135382Z" level=info msg="RemovePodSandbox \"8d724079b815cab68c80f805f3c9db36f335b9bf91e15a1e108f2559c044d35b\" returns successfully"
Jan 13 20:31:18.315991 containerd[1449]: time="2025-01-13T20:31:18.315702830Z" level=info msg="StopPodSandbox for \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\""
Jan 13 20:31:18.315991 containerd[1449]: time="2025-01-13T20:31:18.315796078Z" level=info msg="TearDown network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\" successfully"
Jan 13 20:31:18.315991 containerd[1449]: time="2025-01-13T20:31:18.315806839Z" level=info msg="StopPodSandbox for \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\" returns successfully"
Jan 13 20:31:18.316109 containerd[1449]: time="2025-01-13T20:31:18.316080382Z" level=info msg="RemovePodSandbox for \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\""
Jan 13 20:31:18.316156 containerd[1449]: time="2025-01-13T20:31:18.316111025Z" level=info msg="Forcibly stopping sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\""
Jan 13 20:31:18.316195 containerd[1449]: time="2025-01-13T20:31:18.316177271Z" level=info msg="TearDown network for sandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\" successfully"
Jan 13 20:31:18.323655 containerd[1449]: time="2025-01-13T20:31:18.323618627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.323719 containerd[1449]: time="2025-01-13T20:31:18.323681273Z" level=info msg="RemovePodSandbox \"18037a9fd3c2ba10b2cd67f74cef79458b154604d459e6075c5a99ab49b87e21\" returns successfully"
Jan 13 20:31:18.324359 containerd[1449]: time="2025-01-13T20:31:18.324163394Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\""
Jan 13 20:31:18.324359 containerd[1449]: time="2025-01-13T20:31:18.324251401Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully"
Jan 13 20:31:18.324359 containerd[1449]: time="2025-01-13T20:31:18.324261282Z" level=info msg="StopPodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully"
Jan 13 20:31:18.328583 containerd[1449]: time="2025-01-13T20:31:18.324770366Z" level=info msg="RemovePodSandbox for \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\""
Jan 13 20:31:18.328583 containerd[1449]: time="2025-01-13T20:31:18.324805089Z" level=info msg="Forcibly stopping sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\""
Jan 13 20:31:18.328583 containerd[1449]: time="2025-01-13T20:31:18.324948141Z" level=info msg="TearDown network for sandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" successfully"
Jan 13 20:31:18.329885 containerd[1449]: time="2025-01-13T20:31:18.329851521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.330064 containerd[1449]: time="2025-01-13T20:31:18.330043337Z" level=info msg="RemovePodSandbox \"6ff3f878365f5c93583995c40a4b88bb4d10b8b6cbd07d067f697265269986fb\" returns successfully"
Jan 13 20:31:18.330987 containerd[1449]: time="2025-01-13T20:31:18.330961856Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\""
Jan 13 20:31:18.331065 containerd[1449]: time="2025-01-13T20:31:18.331050543Z" level=info msg="TearDown network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" successfully"
Jan 13 20:31:18.331112 containerd[1449]: time="2025-01-13T20:31:18.331063944Z" level=info msg="StopPodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" returns successfully"
Jan 13 20:31:18.332009 containerd[1449]: time="2025-01-13T20:31:18.331814808Z" level=info msg="RemovePodSandbox for \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\""
Jan 13 20:31:18.332009 containerd[1449]: time="2025-01-13T20:31:18.331877854Z" level=info msg="Forcibly stopping sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\""
Jan 13 20:31:18.332009 containerd[1449]: time="2025-01-13T20:31:18.331966501Z" level=info msg="TearDown network for sandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" successfully"
Jan 13 20:31:18.334553 containerd[1449]: time="2025-01-13T20:31:18.334490397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.334869 containerd[1449]: time="2025-01-13T20:31:18.334654371Z" level=info msg="RemovePodSandbox \"4b056f2b56340a603cfdc19318f83a97ca8aea763054ab5414cfa299ed314b21\" returns successfully"
Jan 13 20:31:18.335184 containerd[1449]: time="2025-01-13T20:31:18.335044805Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\""
Jan 13 20:31:18.335184 containerd[1449]: time="2025-01-13T20:31:18.335126972Z" level=info msg="TearDown network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" successfully"
Jan 13 20:31:18.335184 containerd[1449]: time="2025-01-13T20:31:18.335136253Z" level=info msg="StopPodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" returns successfully"
Jan 13 20:31:18.335402 containerd[1449]: time="2025-01-13T20:31:18.335376033Z" level=info msg="RemovePodSandbox for \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\""
Jan 13 20:31:18.335402 containerd[1449]: time="2025-01-13T20:31:18.335405836Z" level=info msg="Forcibly stopping sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\""
Jan 13 20:31:18.335509 containerd[1449]: time="2025-01-13T20:31:18.335484042Z" level=info msg="TearDown network for sandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" successfully"
Jan 13 20:31:18.337828 containerd[1449]: time="2025-01-13T20:31:18.337790440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.337876 containerd[1449]: time="2025-01-13T20:31:18.337838684Z" level=info msg="RemovePodSandbox \"fe58a6998064b3e9242653fb761145feca56f91a7b11a41150a14cadbeb92f21\" returns successfully"
Jan 13 20:31:18.338342 containerd[1449]: time="2025-01-13T20:31:18.338194874Z" level=info msg="StopPodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\""
Jan 13 20:31:18.338342 containerd[1449]: time="2025-01-13T20:31:18.338276921Z" level=info msg="TearDown network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" successfully"
Jan 13 20:31:18.338342 containerd[1449]: time="2025-01-13T20:31:18.338285682Z" level=info msg="StopPodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" returns successfully"
Jan 13 20:31:18.338898 containerd[1449]: time="2025-01-13T20:31:18.338871492Z" level=info msg="RemovePodSandbox for \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\""
Jan 13 20:31:18.338959 containerd[1449]: time="2025-01-13T20:31:18.338901335Z" level=info msg="Forcibly stopping sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\""
Jan 13 20:31:18.338998 containerd[1449]: time="2025-01-13T20:31:18.338981742Z" level=info msg="TearDown network for sandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" successfully"
Jan 13 20:31:18.341748 containerd[1449]: time="2025-01-13T20:31:18.341714095Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.341802 containerd[1449]: time="2025-01-13T20:31:18.341769180Z" level=info msg="RemovePodSandbox \"4248f57e7d5002ec4ded68e8604c05f3f557aaf1a7e22985b0ef5de114a7859c\" returns successfully"
Jan 13 20:31:18.342294 containerd[1449]: time="2025-01-13T20:31:18.342092008Z" level=info msg="StopPodSandbox for \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\""
Jan 13 20:31:18.342294 containerd[1449]: time="2025-01-13T20:31:18.342174455Z" level=info msg="TearDown network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\" successfully"
Jan 13 20:31:18.342294 containerd[1449]: time="2025-01-13T20:31:18.342183976Z" level=info msg="StopPodSandbox for \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\" returns successfully"
Jan 13 20:31:18.343393 containerd[1449]: time="2025-01-13T20:31:18.343361796Z" level=info msg="RemovePodSandbox for \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\""
Jan 13 20:31:18.343657 containerd[1449]: time="2025-01-13T20:31:18.343499808Z" level=info msg="Forcibly stopping sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\""
Jan 13 20:31:18.343657 containerd[1449]: time="2025-01-13T20:31:18.343580415Z" level=info msg="TearDown network for sandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\" successfully"
Jan 13 20:31:18.345964 containerd[1449]: time="2025-01-13T20:31:18.345933376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.346182 containerd[1449]: time="2025-01-13T20:31:18.346080749Z" level=info msg="RemovePodSandbox \"59d0cc9c07439bbba899aeac4a8b9d1c6cf4747a77d08b51c8fdf1a1305a78f2\" returns successfully"
Jan 13 20:31:18.346568 containerd[1449]: time="2025-01-13T20:31:18.346513586Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\""
Jan 13 20:31:18.346640 containerd[1449]: time="2025-01-13T20:31:18.346621715Z" level=info msg="TearDown network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully"
Jan 13 20:31:18.346677 containerd[1449]: time="2025-01-13T20:31:18.346638997Z" level=info msg="StopPodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully"
Jan 13 20:31:18.348087 containerd[1449]: time="2025-01-13T20:31:18.346956704Z" level=info msg="RemovePodSandbox for \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\""
Jan 13 20:31:18.348087 containerd[1449]: time="2025-01-13T20:31:18.346981746Z" level=info msg="Forcibly stopping sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\""
Jan 13 20:31:18.348087 containerd[1449]: time="2025-01-13T20:31:18.347042711Z" level=info msg="TearDown network for sandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" successfully"
Jan 13 20:31:18.357949 containerd[1449]: time="2025-01-13T20:31:18.357921562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.358115 containerd[1449]: time="2025-01-13T20:31:18.358097617Z" level=info msg="RemovePodSandbox \"9337bb1f603c24ba6ca35c3654eb6cf05598f7e4d660f21b53662788b856b030\" returns successfully"
Jan 13 20:31:18.358609 containerd[1449]: time="2025-01-13T20:31:18.358581378Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\""
Jan 13 20:31:18.358688 containerd[1449]: time="2025-01-13T20:31:18.358673386Z" level=info msg="TearDown network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" successfully"
Jan 13 20:31:18.358721 containerd[1449]: time="2025-01-13T20:31:18.358687868Z" level=info msg="StopPodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" returns successfully"
Jan 13 20:31:18.360235 containerd[1449]: time="2025-01-13T20:31:18.359061340Z" level=info msg="RemovePodSandbox for \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\""
Jan 13 20:31:18.360235 containerd[1449]: time="2025-01-13T20:31:18.359086822Z" level=info msg="Forcibly stopping sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\""
Jan 13 20:31:18.360235 containerd[1449]: time="2025-01-13T20:31:18.359146507Z" level=info msg="TearDown network for sandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" successfully"
Jan 13 20:31:18.361491 containerd[1449]: time="2025-01-13T20:31:18.361462985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.361703 containerd[1449]: time="2025-01-13T20:31:18.361626439Z" level=info msg="RemovePodSandbox \"d759147d0355725660a4a0b3bb78339917fdf984cb56b79b18e8509ae207c0b0\" returns successfully"
Jan 13 20:31:18.361980 containerd[1449]: time="2025-01-13T20:31:18.361949907Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\""
Jan 13 20:31:18.362065 containerd[1449]: time="2025-01-13T20:31:18.362046195Z" level=info msg="TearDown network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" successfully"
Jan 13 20:31:18.362065 containerd[1449]: time="2025-01-13T20:31:18.362063196Z" level=info msg="StopPodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" returns successfully"
Jan 13 20:31:18.362362 containerd[1449]: time="2025-01-13T20:31:18.362338740Z" level=info msg="RemovePodSandbox for \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\""
Jan 13 20:31:18.363527 containerd[1449]: time="2025-01-13T20:31:18.362458190Z" level=info msg="Forcibly stopping sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\""
Jan 13 20:31:18.363527 containerd[1449]: time="2025-01-13T20:31:18.362566799Z" level=info msg="TearDown network for sandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" successfully"
Jan 13 20:31:18.364903 containerd[1449]: time="2025-01-13T20:31:18.364867756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.365183 containerd[1449]: time="2025-01-13T20:31:18.365041211Z" level=info msg="RemovePodSandbox \"fc6d0aa128b08ee847c9327aaab1a518a44f6b9116caef01b282aee1404e1ed3\" returns successfully"
Jan 13 20:31:18.365480 containerd[1449]: time="2025-01-13T20:31:18.365455207Z" level=info msg="StopPodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\""
Jan 13 20:31:18.365830 containerd[1449]: time="2025-01-13T20:31:18.365689547Z" level=info msg="TearDown network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" successfully"
Jan 13 20:31:18.365830 containerd[1449]: time="2025-01-13T20:31:18.365705468Z" level=info msg="StopPodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" returns successfully"
Jan 13 20:31:18.366208 containerd[1449]: time="2025-01-13T20:31:18.366088061Z" level=info msg="RemovePodSandbox for \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\""
Jan 13 20:31:18.366208 containerd[1449]: time="2025-01-13T20:31:18.366110983Z" level=info msg="Forcibly stopping sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\""
Jan 13 20:31:18.366208 containerd[1449]: time="2025-01-13T20:31:18.366169348Z" level=info msg="TearDown network for sandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" successfully"
Jan 13 20:31:18.368636 containerd[1449]: time="2025-01-13T20:31:18.368505587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.368636 containerd[1449]: time="2025-01-13T20:31:18.368570233Z" level=info msg="RemovePodSandbox \"9abf490ce38226a786f2d986301dfedc02cd069f6afacf77e348719211c56812\" returns successfully"
Jan 13 20:31:18.368894 containerd[1449]: time="2025-01-13T20:31:18.368832895Z" level=info msg="StopPodSandbox for \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\""
Jan 13 20:31:18.369065 containerd[1449]: time="2025-01-13T20:31:18.368992509Z" level=info msg="TearDown network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\" successfully"
Jan 13 20:31:18.369065 containerd[1449]: time="2025-01-13T20:31:18.369009751Z" level=info msg="StopPodSandbox for \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\" returns successfully"
Jan 13 20:31:18.369240 containerd[1449]: time="2025-01-13T20:31:18.369213888Z" level=info msg="RemovePodSandbox for \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\""
Jan 13 20:31:18.369285 containerd[1449]: time="2025-01-13T20:31:18.369242451Z" level=info msg="Forcibly stopping sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\""
Jan 13 20:31:18.369311 containerd[1449]: time="2025-01-13T20:31:18.369304336Z" level=info msg="TearDown network for sandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\" successfully"
Jan 13 20:31:18.371654 containerd[1449]: time="2025-01-13T20:31:18.371622054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.371716 containerd[1449]: time="2025-01-13T20:31:18.371683899Z" level=info msg="RemovePodSandbox \"e27a0631252ded34759262131f15c21e07c306b23e33e72b16e5689484506ef2\" returns successfully"
Jan 13 20:31:18.372001 containerd[1449]: time="2025-01-13T20:31:18.371980405Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\""
Jan 13 20:31:18.372252 containerd[1449]: time="2025-01-13T20:31:18.372181582Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully"
Jan 13 20:31:18.372252 containerd[1449]: time="2025-01-13T20:31:18.372196863Z" level=info msg="StopPodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully"
Jan 13 20:31:18.373649 containerd[1449]: time="2025-01-13T20:31:18.372584976Z" level=info msg="RemovePodSandbox for \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\""
Jan 13 20:31:18.373649 containerd[1449]: time="2025-01-13T20:31:18.372611219Z" level=info msg="Forcibly stopping sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\""
Jan 13 20:31:18.373649 containerd[1449]: time="2025-01-13T20:31:18.372674464Z" level=info msg="TearDown network for sandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" successfully"
Jan 13 20:31:18.374925 containerd[1449]: time="2025-01-13T20:31:18.374900975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.375065 containerd[1449]: time="2025-01-13T20:31:18.375048427Z" level=info msg="RemovePodSandbox \"d39152a6fc1537621236a59d0c339b87e86a5ef4286937e0fdb62843acfe45ae\" returns successfully"
Jan 13 20:31:18.375383 containerd[1449]: time="2025-01-13T20:31:18.375360134Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\""
Jan 13 20:31:18.375476 containerd[1449]: time="2025-01-13T20:31:18.375459742Z" level=info msg="TearDown network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" successfully"
Jan 13 20:31:18.375476 containerd[1449]: time="2025-01-13T20:31:18.375475024Z" level=info msg="StopPodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" returns successfully"
Jan 13 20:31:18.376827 containerd[1449]: time="2025-01-13T20:31:18.375793931Z" level=info msg="RemovePodSandbox for \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\""
Jan 13 20:31:18.376827 containerd[1449]: time="2025-01-13T20:31:18.375830454Z" level=info msg="Forcibly stopping sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\""
Jan 13 20:31:18.376827 containerd[1449]: time="2025-01-13T20:31:18.375893780Z" level=info msg="TearDown network for sandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" successfully"
Jan 13 20:31:18.378788 containerd[1449]: time="2025-01-13T20:31:18.378764345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.378923 containerd[1449]: time="2025-01-13T20:31:18.378907077Z" level=info msg="RemovePodSandbox \"822d26c1b26dbe791c1f7b69d5ddd74ee6c9ad2e3c5e4663ee663ad0b60316a9\" returns successfully"
Jan 13 20:31:18.379347 containerd[1449]: time="2025-01-13T20:31:18.379327353Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\""
Jan 13 20:31:18.386522 containerd[1449]: time="2025-01-13T20:31:18.386486006Z" level=info msg="TearDown network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" successfully"
Jan 13 20:31:18.386626 containerd[1449]: time="2025-01-13T20:31:18.386610936Z" level=info msg="StopPodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" returns successfully"
Jan 13 20:31:18.387110 containerd[1449]: time="2025-01-13T20:31:18.387076136Z" level=info msg="RemovePodSandbox for \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\""
Jan 13 20:31:18.387182 containerd[1449]: time="2025-01-13T20:31:18.387114940Z" level=info msg="Forcibly stopping sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\""
Jan 13 20:31:18.387224 containerd[1449]: time="2025-01-13T20:31:18.387205747Z" level=info msg="TearDown network for sandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" successfully"
Jan 13 20:31:18.389985 containerd[1449]: time="2025-01-13T20:31:18.389947342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.390043 containerd[1449]: time="2025-01-13T20:31:18.390018668Z" level=info msg="RemovePodSandbox \"8756a856d4b009fabe0ee6301dd638e4fa9e16801d79908294c945cb1bffd67b\" returns successfully"
Jan 13 20:31:18.390467 containerd[1449]: time="2025-01-13T20:31:18.390443984Z" level=info msg="StopPodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\""
Jan 13 20:31:18.390658 containerd[1449]: time="2025-01-13T20:31:18.390638161Z" level=info msg="TearDown network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" successfully"
Jan 13 20:31:18.390735 containerd[1449]: time="2025-01-13T20:31:18.390721088Z" level=info msg="StopPodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" returns successfully"
Jan 13 20:31:18.391047 containerd[1449]: time="2025-01-13T20:31:18.391026834Z" level=info msg="RemovePodSandbox for \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\""
Jan 13 20:31:18.391133 containerd[1449]: time="2025-01-13T20:31:18.391119242Z" level=info msg="Forcibly stopping sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\""
Jan 13 20:31:18.391269 containerd[1449]: time="2025-01-13T20:31:18.391253014Z" level=info msg="TearDown network for sandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" successfully"
Jan 13 20:31:18.394188 containerd[1449]: time="2025-01-13T20:31:18.394160342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.394318 containerd[1449]: time="2025-01-13T20:31:18.394301274Z" level=info msg="RemovePodSandbox \"b49d8bbc5872aa71827f6c91bd741ba4297fa8a631a80fb080ede02d8dd26312\" returns successfully"
Jan 13 20:31:18.394713 containerd[1449]: time="2025-01-13T20:31:18.394691508Z" level=info msg="StopPodSandbox for \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\""
Jan 13 20:31:18.394942 containerd[1449]: time="2025-01-13T20:31:18.394866043Z" level=info msg="TearDown network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\" successfully"
Jan 13 20:31:18.394942 containerd[1449]: time="2025-01-13T20:31:18.394883204Z" level=info msg="StopPodSandbox for \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\" returns successfully"
Jan 13 20:31:18.395211 containerd[1449]: time="2025-01-13T20:31:18.395188030Z" level=info msg="RemovePodSandbox for \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\""
Jan 13 20:31:18.395245 containerd[1449]: time="2025-01-13T20:31:18.395216633Z" level=info msg="Forcibly stopping sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\""
Jan 13 20:31:18.395300 containerd[1449]: time="2025-01-13T20:31:18.395286079Z" level=info msg="TearDown network for sandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\" successfully"
Jan 13 20:31:18.397820 containerd[1449]: time="2025-01-13T20:31:18.397760410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.397874 containerd[1449]: time="2025-01-13T20:31:18.397840137Z" level=info msg="RemovePodSandbox \"b937976515bc4386cbc91b1a672d23935498f62b19acb872e035ee50bd8391cc\" returns successfully"
Jan 13 20:31:18.398184 containerd[1449]: time="2025-01-13T20:31:18.398163125Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\""
Jan 13 20:31:18.398257 containerd[1449]: time="2025-01-13T20:31:18.398241852Z" level=info msg="TearDown network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully"
Jan 13 20:31:18.398287 containerd[1449]: time="2025-01-13T20:31:18.398256133Z" level=info msg="StopPodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully"
Jan 13 20:31:18.398798 containerd[1449]: time="2025-01-13T20:31:18.398499394Z" level=info msg="RemovePodSandbox for \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\""
Jan 13 20:31:18.398798 containerd[1449]: time="2025-01-13T20:31:18.398525516Z" level=info msg="Forcibly stopping sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\""
Jan 13 20:31:18.398798 containerd[1449]: time="2025-01-13T20:31:18.398601242Z" level=info msg="TearDown network for sandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" successfully"
Jan 13 20:31:18.400932 containerd[1449]: time="2025-01-13T20:31:18.400905759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.401111 containerd[1449]: time="2025-01-13T20:31:18.401046651Z" level=info msg="RemovePodSandbox \"e2805a8dda438eff1d0c2f55b32b782d9c1dc5901d5efac2a41313f78dd5f358\" returns successfully"
Jan 13 20:31:18.401298 containerd[1449]: time="2025-01-13T20:31:18.401276951Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\""
Jan 13 20:31:18.401365 containerd[1449]: time="2025-01-13T20:31:18.401352958Z" level=info msg="TearDown network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" successfully"
Jan 13 20:31:18.401388 containerd[1449]: time="2025-01-13T20:31:18.401365959Z" level=info msg="StopPodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" returns successfully"
Jan 13 20:31:18.401676 containerd[1449]: time="2025-01-13T20:31:18.401655224Z" level=info msg="RemovePodSandbox for \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\""
Jan 13 20:31:18.401720 containerd[1449]: time="2025-01-13T20:31:18.401679506Z" level=info msg="Forcibly stopping sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\""
Jan 13 20:31:18.401744 containerd[1449]: time="2025-01-13T20:31:18.401734230Z" level=info msg="TearDown network for sandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" successfully"
Jan 13 20:31:18.404200 containerd[1449]: time="2025-01-13T20:31:18.404151317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.404200 containerd[1449]: time="2025-01-13T20:31:18.404199001Z" level=info msg="RemovePodSandbox \"597fc95d0d65139c5ae8f7bc69afebc185cc01968b607fea11abd75b884cb672\" returns successfully"
Jan 13 20:31:18.404716 containerd[1449]: time="2025-01-13T20:31:18.404568273Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\""
Jan 13 20:31:18.404716 containerd[1449]: time="2025-01-13T20:31:18.404651240Z" level=info msg="TearDown network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" successfully"
Jan 13 20:31:18.404716 containerd[1449]: time="2025-01-13T20:31:18.404661241Z" level=info msg="StopPodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" returns successfully"
Jan 13 20:31:18.405060 containerd[1449]: time="2025-01-13T20:31:18.405031752Z" level=info msg="RemovePodSandbox for \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\""
Jan 13 20:31:18.405170 containerd[1449]: time="2025-01-13T20:31:18.405061875Z" level=info msg="Forcibly stopping sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\""
Jan 13 20:31:18.405267 containerd[1449]: time="2025-01-13T20:31:18.405239410Z" level=info msg="TearDown network for sandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" successfully"
Jan 13 20:31:18.407621 containerd[1449]: time="2025-01-13T20:31:18.407591131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.407689 containerd[1449]: time="2025-01-13T20:31:18.407643376Z" level=info msg="RemovePodSandbox \"12075a8be0ad7e3b6adc6798d491144f903fffff3ebdf8ebd973a2543f85c392\" returns successfully"
Jan 13 20:31:18.407986 containerd[1449]: time="2025-01-13T20:31:18.407964283Z" level=info msg="StopPodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\""
Jan 13 20:31:18.408070 containerd[1449]: time="2025-01-13T20:31:18.408054531Z" level=info msg="TearDown network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" successfully"
Jan 13 20:31:18.408104 containerd[1449]: time="2025-01-13T20:31:18.408070572Z" level=info msg="StopPodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" returns successfully"
Jan 13 20:31:18.408380 containerd[1449]: time="2025-01-13T20:31:18.408348156Z" level=info msg="RemovePodSandbox for \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\""
Jan 13 20:31:18.408562 containerd[1449]: time="2025-01-13T20:31:18.408520531Z" level=info msg="Forcibly stopping sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\""
Jan 13 20:31:18.409746 containerd[1449]: time="2025-01-13T20:31:18.408679464Z" level=info msg="TearDown network for sandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" successfully"
Jan 13 20:31:18.410901 containerd[1449]: time="2025-01-13T20:31:18.410870692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.411021 containerd[1449]: time="2025-01-13T20:31:18.411004943Z" level=info msg="RemovePodSandbox \"a0f6b2a96bbcddb21e2e35ca6ec1a7093fa5fa273e6581ee9a9af22197c639d8\" returns successfully"
Jan 13 20:31:18.411401 containerd[1449]: time="2025-01-13T20:31:18.411373695Z" level=info msg="StopPodSandbox for \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\""
Jan 13 20:31:18.411498 containerd[1449]: time="2025-01-13T20:31:18.411479024Z" level=info msg="TearDown network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\" successfully"
Jan 13 20:31:18.411575 containerd[1449]: time="2025-01-13T20:31:18.411497346Z" level=info msg="StopPodSandbox for \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\" returns successfully"
Jan 13 20:31:18.411922 containerd[1449]: time="2025-01-13T20:31:18.411830934Z" level=info msg="RemovePodSandbox for \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\""
Jan 13 20:31:18.411922 containerd[1449]: time="2025-01-13T20:31:18.411869377Z" level=info msg="Forcibly stopping sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\""
Jan 13 20:31:18.411993 containerd[1449]: time="2025-01-13T20:31:18.411934263Z" level=info msg="TearDown network for sandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\" successfully"
Jan 13 20:31:18.414256 containerd[1449]: time="2025-01-13T20:31:18.414226059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.414305 containerd[1449]: time="2025-01-13T20:31:18.414276423Z" level=info msg="RemovePodSandbox \"7009ab9fe0d0c54a2f1a6562dda6cba704fec9b4594788d8e47830f96b28e57c\" returns successfully"
Jan 13 20:31:18.414872 containerd[1449]: time="2025-01-13T20:31:18.414585650Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\""
Jan 13 20:31:18.414872 containerd[1449]: time="2025-01-13T20:31:18.414675137Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully"
Jan 13 20:31:18.414872 containerd[1449]: time="2025-01-13T20:31:18.414685138Z" level=info msg="StopPodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully"
Jan 13 20:31:18.414983 containerd[1449]: time="2025-01-13T20:31:18.414949201Z" level=info msg="RemovePodSandbox for \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\""
Jan 13 20:31:18.414983 containerd[1449]: time="2025-01-13T20:31:18.414973283Z" level=info msg="Forcibly stopping sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\""
Jan 13 20:31:18.415059 containerd[1449]: time="2025-01-13T20:31:18.415038969Z" level=info msg="TearDown network for sandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" successfully"
Jan 13 20:31:18.417483 containerd[1449]: time="2025-01-13T20:31:18.417452735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.417545 containerd[1449]: time="2025-01-13T20:31:18.417503379Z" level=info msg="RemovePodSandbox \"01d5bbe17e56f7c7083cb740ffa0476c833343a17ac74aeb1a900b08fcaf2d63\" returns successfully"
Jan 13 20:31:18.417991 containerd[1449]: time="2025-01-13T20:31:18.417967259Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\""
Jan 13 20:31:18.418097 containerd[1449]: time="2025-01-13T20:31:18.418080189Z" level=info msg="TearDown network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" successfully"
Jan 13 20:31:18.418119 containerd[1449]: time="2025-01-13T20:31:18.418096350Z" level=info msg="StopPodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" returns successfully"
Jan 13 20:31:18.418420 containerd[1449]: time="2025-01-13T20:31:18.418397576Z" level=info msg="RemovePodSandbox for \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\""
Jan 13 20:31:18.419628 containerd[1449]: time="2025-01-13T20:31:18.418503865Z" level=info msg="Forcibly stopping sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\""
Jan 13 20:31:18.419628 containerd[1449]: time="2025-01-13T20:31:18.418595313Z" level=info msg="TearDown network for sandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" successfully"
Jan 13 20:31:18.420966 containerd[1449]: time="2025-01-13T20:31:18.420853386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.420966 containerd[1449]: time="2025-01-13T20:31:18.420904550Z" level=info msg="RemovePodSandbox \"2647bc3b6ece374655386d1a80524db37cf5858a2fee36103340216679d81300\" returns successfully"
Jan 13 20:31:18.421285 containerd[1449]: time="2025-01-13T20:31:18.421264341Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\""
Jan 13 20:31:18.421361 containerd[1449]: time="2025-01-13T20:31:18.421346268Z" level=info msg="TearDown network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" successfully"
Jan 13 20:31:18.421386 containerd[1449]: time="2025-01-13T20:31:18.421359309Z" level=info msg="StopPodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" returns successfully"
Jan 13 20:31:18.422034 containerd[1449]: time="2025-01-13T20:31:18.421655455Z" level=info msg="RemovePodSandbox for \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\""
Jan 13 20:31:18.422034 containerd[1449]: time="2025-01-13T20:31:18.421690218Z" level=info msg="Forcibly stopping sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\""
Jan 13 20:31:18.422034 containerd[1449]: time="2025-01-13T20:31:18.421759704Z" level=info msg="TearDown network for sandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" successfully"
Jan 13 20:31:18.423833 containerd[1449]: time="2025-01-13T20:31:18.423801078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.423894 containerd[1449]: time="2025-01-13T20:31:18.423852163Z" level=info msg="RemovePodSandbox \"7dc39cf12b6679d45e439963f1d8ea527ed639331f599299008cb3a8d971d87d\" returns successfully"
Jan 13 20:31:18.424157 containerd[1449]: time="2025-01-13T20:31:18.424125746Z" level=info msg="StopPodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\""
Jan 13 20:31:18.424242 containerd[1449]: time="2025-01-13T20:31:18.424221874Z" level=info msg="TearDown network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" successfully"
Jan 13 20:31:18.424242 containerd[1449]: time="2025-01-13T20:31:18.424237115Z" level=info msg="StopPodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" returns successfully"
Jan 13 20:31:18.425680 containerd[1449]: time="2025-01-13T20:31:18.424514819Z" level=info msg="RemovePodSandbox for \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\""
Jan 13 20:31:18.425680 containerd[1449]: time="2025-01-13T20:31:18.424554623Z" level=info msg="Forcibly stopping sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\""
Jan 13 20:31:18.425680 containerd[1449]: time="2025-01-13T20:31:18.424613228Z" level=info msg="TearDown network for sandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" successfully"
Jan 13 20:31:18.427965 containerd[1449]: time="2025-01-13T20:31:18.427176607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.427965 containerd[1449]: time="2025-01-13T20:31:18.427924911Z" level=info msg="RemovePodSandbox \"9554b1bca4b213353c5ba998a5dd52eb548f7df94ec5ff46724a5aaeb70f9be9\" returns successfully"
Jan 13 20:31:18.428469 containerd[1449]: time="2025-01-13T20:31:18.428442035Z" level=info msg="StopPodSandbox for \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\""
Jan 13 20:31:18.428571 containerd[1449]: time="2025-01-13T20:31:18.428534043Z" level=info msg="TearDown network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\" successfully"
Jan 13 20:31:18.428571 containerd[1449]: time="2025-01-13T20:31:18.428568846Z" level=info msg="StopPodSandbox for \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\" returns successfully"
Jan 13 20:31:18.429022 containerd[1449]: time="2025-01-13T20:31:18.428900554Z" level=info msg="RemovePodSandbox for \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\""
Jan 13 20:31:18.429063 containerd[1449]: time="2025-01-13T20:31:18.429035486Z" level=info msg="Forcibly stopping sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\""
Jan 13 20:31:18.429134 containerd[1449]: time="2025-01-13T20:31:18.429110252Z" level=info msg="TearDown network for sandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\" successfully"
Jan 13 20:31:18.431279 containerd[1449]: time="2025-01-13T20:31:18.431239595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:31:18.431279 containerd[1449]: time="2025-01-13T20:31:18.431287319Z" level=info msg="RemovePodSandbox \"dd4df888906b41b0619fd44b1ab4a0973b52e36f69ae2891ff60516df6099166\" returns successfully"
Jan 13 20:31:21.269803 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:36362.service - OpenSSH per-connection server daemon (10.0.0.1:36362).
Jan 13 20:31:21.359504 sshd[5898]: Accepted publickey for core from 10.0.0.1 port 36362 ssh2: RSA SHA256:dHV21v/TvsC6tzdcDH8HHQ5Gsjsp+3vcXiRjWYQ6Qqo
Jan 13 20:31:21.360755 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:21.364747 systemd-logind[1430]: New session 20 of user core.
Jan 13 20:31:21.377744 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:31:21.522370 sshd[5900]: Connection closed by 10.0.0.1 port 36362
Jan 13 20:31:21.522276 sshd-session[5898]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:21.526811 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:36362.service: Deactivated successfully.
Jan 13 20:31:21.529176 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:31:21.531149 systemd-logind[1430]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:31:21.532092 systemd-logind[1430]: Removed session 20.