Oct 8 19:43:29.959149 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:43:29.959171 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:43:29.959181 kernel: KASLR enabled
Oct 8 19:43:29.959187 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:43:29.959192 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 19:43:29.959198 kernel: random: crng init done
Oct 8 19:43:29.959205 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:43:29.959211 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 19:43:29.959217 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:43:29.959225 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959231 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959237 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959243 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959249 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959256 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959264 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959271 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959277 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:43:29.959284 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 19:43:29.959290 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:43:29.959297 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:43:29.959303 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 8 19:43:29.959309 kernel: Zone ranges:
Oct 8 19:43:29.959315 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:43:29.959322 kernel: DMA32 empty
Oct 8 19:43:29.959329 kernel: Normal empty
Oct 8 19:43:29.959336 kernel: Movable zone start for each node
Oct 8 19:43:29.959342 kernel: Early memory node ranges
Oct 8 19:43:29.959348 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 19:43:29.959354 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 19:43:29.959361 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 19:43:29.959367 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 19:43:29.959373 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 19:43:29.959380 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 19:43:29.959386 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 19:43:29.959392 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:43:29.959399 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 19:43:29.959406 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:43:29.959413 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:43:29.959419 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:43:29.959428 kernel: psci: Trusted OS migration not required
Oct 8 19:43:29.959434 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:43:29.959441 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:43:29.959449 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:43:29.959456 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:43:29.959463 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 19:43:29.959470 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:43:29.959477 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:43:29.959483 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:43:29.959490 kernel: CPU features: detected: Spectre-v4
Oct 8 19:43:29.959496 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:43:29.959503 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:43:29.959510 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:43:29.959518 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:43:29.959525 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:43:29.959531 kernel: alternatives: applying boot alternatives
Oct 8 19:43:29.959539 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:43:29.959546 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:43:29.959553 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:43:29.959560 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:43:29.959567 kernel: Fallback order for Node 0: 0
Oct 8 19:43:29.959573 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 19:43:29.959580 kernel: Policy zone: DMA
Oct 8 19:43:29.959587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:43:29.959595 kernel: software IO TLB: area num 4.
Oct 8 19:43:29.959602 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 19:43:29.959609 kernel: Memory: 2386788K/2572288K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 185500K reserved, 0K cma-reserved)
Oct 8 19:43:29.959616 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:43:29.959623 kernel: trace event string verifier disabled
Oct 8 19:43:29.959629 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:43:29.959636 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:43:29.959643 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:43:29.959650 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:43:29.959657 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:43:29.959664 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:43:29.959670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:43:29.959763 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:43:29.959776 kernel: GICv3: 256 SPIs implemented
Oct 8 19:43:29.959783 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:43:29.959790 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:43:29.959797 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:43:29.959804 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:43:29.959810 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:43:29.959817 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:43:29.959824 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:43:29.959831 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 19:43:29.959838 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 19:43:29.959849 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:43:29.959856 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:43:29.959862 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:43:29.959869 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:43:29.959876 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:43:29.959883 kernel: arm-pv: using stolen time PV
Oct 8 19:43:29.959893 kernel: Console: colour dummy device 80x25
Oct 8 19:43:29.959900 kernel: ACPI: Core revision 20230628
Oct 8 19:43:29.959908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:43:29.959914 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:43:29.959923 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:43:29.959930 kernel: SELinux: Initializing.
Oct 8 19:43:29.959937 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:43:29.959944 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:43:29.959951 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:43:29.959958 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:43:29.959965 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:43:29.959972 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:43:29.959979 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:43:29.959988 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:43:29.959995 kernel: Remapping and enabling EFI services.
Oct 8 19:43:29.960002 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:43:29.960009 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:43:29.960016 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:43:29.960029 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 19:43:29.960036 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:43:29.960043 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:43:29.960050 kernel: Detected PIPT I-cache on CPU2
Oct 8 19:43:29.960057 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 19:43:29.960066 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 19:43:29.960073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:43:29.960085 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 19:43:29.960097 kernel: Detected PIPT I-cache on CPU3
Oct 8 19:43:29.960105 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 19:43:29.960112 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 19:43:29.960119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:43:29.960126 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 19:43:29.960134 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:43:29.960143 kernel: SMP: Total of 4 processors activated.
Oct 8 19:43:29.960150 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:43:29.960158 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:43:29.960165 kernel: CPU features: detected: Common not Private translations
Oct 8 19:43:29.960172 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:43:29.960180 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:43:29.960187 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:43:29.960194 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:43:29.960203 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:43:29.960210 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:43:29.960217 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:43:29.960224 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:43:29.960231 kernel: alternatives: applying system-wide alternatives
Oct 8 19:43:29.960239 kernel: devtmpfs: initialized
Oct 8 19:43:29.960246 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:43:29.960253 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:43:29.960261 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:43:29.960270 kernel: SMBIOS 3.0.0 present.
Oct 8 19:43:29.960278 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 19:43:29.960285 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:43:29.960293 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:43:29.960300 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:43:29.960307 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:43:29.960315 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:43:29.960322 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Oct 8 19:43:29.960329 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:43:29.960338 kernel: cpuidle: using governor menu
Oct 8 19:43:29.960346 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:43:29.960353 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:43:29.960360 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:43:29.960367 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:43:29.960375 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:43:29.960382 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:43:29.960389 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:43:29.960396 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:43:29.960405 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:43:29.960412 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:43:29.960420 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:43:29.960427 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:43:29.960434 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:43:29.960441 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:43:29.960449 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:43:29.960456 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:43:29.960463 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:43:29.960472 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:43:29.960479 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:43:29.960487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:43:29.960494 kernel: ACPI: Interpreter enabled
Oct 8 19:43:29.960501 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:43:29.960508 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:43:29.960517 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:43:29.960524 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:43:29.960531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:43:29.960675 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:43:29.960780 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:43:29.960847 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:43:29.960910 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:43:29.960974 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:43:29.960984 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:43:29.960992 kernel: PCI host bridge to bus 0000:00
Oct 8 19:43:29.961078 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:43:29.961140 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:43:29.961198 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:43:29.961255 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:43:29.961335 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:43:29.961409 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:43:29.961478 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 19:43:29.961549 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 19:43:29.961614 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:43:29.961698 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:43:29.961767 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 19:43:29.961834 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 19:43:29.961894 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:43:29.961952 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:43:29.962015 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:43:29.962033 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:43:29.962041 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:43:29.962048 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:43:29.962056 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:43:29.962063 kernel: iommu: Default domain type: Translated
Oct 8 19:43:29.962070 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:43:29.962077 kernel: efivars: Registered efivars operations
Oct 8 19:43:29.962087 kernel: vgaarb: loaded
Oct 8 19:43:29.962094 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:43:29.962102 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:43:29.962110 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:43:29.962117 kernel: pnp: PnP ACPI init
Oct 8 19:43:29.962201 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:43:29.962212 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:43:29.962220 kernel: NET: Registered PF_INET protocol family
Oct 8 19:43:29.962230 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:43:29.962238 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:43:29.962245 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:43:29.962255 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:43:29.962263 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:43:29.962272 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:43:29.962280 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:43:29.962292 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:43:29.962300 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:43:29.962310 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:43:29.962318 kernel: kvm [1]: HYP mode not available
Oct 8 19:43:29.962325 kernel: Initialise system trusted keyrings
Oct 8 19:43:29.962332 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:43:29.962340 kernel: Key type asymmetric registered
Oct 8 19:43:29.962349 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:43:29.962357 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:43:29.962368 kernel: io scheduler mq-deadline registered
Oct 8 19:43:29.962381 kernel: io scheduler kyber registered
Oct 8 19:43:29.962392 kernel: io scheduler bfq registered
Oct 8 19:43:29.962402 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:43:29.962409 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:43:29.962417 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:43:29.962498 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 19:43:29.962513 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:43:29.962521 kernel: thunder_xcv, ver 1.0
Oct 8 19:43:29.962528 kernel: thunder_bgx, ver 1.0
Oct 8 19:43:29.962536 kernel: nicpf, ver 1.0
Oct 8 19:43:29.962544 kernel: nicvf, ver 1.0
Oct 8 19:43:29.962623 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:43:29.962730 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:43:29 UTC (1728416609)
Oct 8 19:43:29.962742 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:43:29.962749 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 19:43:29.962757 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:43:29.962764 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:43:29.962771 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:43:29.962782 kernel: Segment Routing with IPv6
Oct 8 19:43:29.962789 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:43:29.963433 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:43:29.963457 kernel: Key type dns_resolver registered
Oct 8 19:43:29.963465 kernel: registered taskstats version 1
Oct 8 19:43:29.963473 kernel: Loading compiled-in X.509 certificates
Oct 8 19:43:29.963480 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e'
Oct 8 19:43:29.963488 kernel: Key type .fscrypt registered
Oct 8 19:43:29.963496 kernel: Key type fscrypt-provisioning registered
Oct 8 19:43:29.963508 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:43:29.963515 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:43:29.963523 kernel: ima: No architecture policies found
Oct 8 19:43:29.963531 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:43:29.963539 kernel: clk: Disabling unused clocks
Oct 8 19:43:29.963546 kernel: Freeing unused kernel memory: 39104K
Oct 8 19:43:29.963554 kernel: Run /init as init process
Oct 8 19:43:29.963561 kernel: with arguments:
Oct 8 19:43:29.963569 kernel: /init
Oct 8 19:43:29.963581 kernel: with environment:
Oct 8 19:43:29.963594 kernel: HOME=/
Oct 8 19:43:29.963601 kernel: TERM=linux
Oct 8 19:43:29.963609 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:43:29.963619 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:43:29.963629 systemd[1]: Detected virtualization kvm.
Oct 8 19:43:29.963638 systemd[1]: Detected architecture arm64.
Oct 8 19:43:29.963645 systemd[1]: Running in initrd.
Oct 8 19:43:29.963655 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:43:29.963663 systemd[1]: Hostname set to .
Oct 8 19:43:29.963671 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:43:29.963697 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:43:29.963706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:43:29.963714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:43:29.963723 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:43:29.963731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:43:29.963742 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:43:29.963750 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:43:29.963760 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:43:29.963768 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:43:29.963777 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:43:29.963785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:43:29.963795 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:43:29.963803 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:43:29.963811 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:43:29.963819 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:43:29.963827 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:43:29.963835 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:43:29.963844 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:43:29.963852 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:43:29.963860 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:43:29.963869 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:43:29.963878 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:43:29.963886 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:43:29.963894 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:43:29.963903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:43:29.963911 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:43:29.963919 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:43:29.963927 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:43:29.963935 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:43:29.963945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:43:29.963953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:43:29.963962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:43:29.963970 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:43:29.963979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:43:29.963989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:43:29.963997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:43:29.964006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:43:29.964014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:43:29.964058 systemd-journald[236]: Collecting audit messages is disabled.
Oct 8 19:43:29.964082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:43:29.964091 systemd-journald[236]: Journal started
Oct 8 19:43:29.964110 systemd-journald[236]: Runtime Journal (/run/log/journal/040e1087368e4ae4a1fd567f431b52fe) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:43:29.944399 systemd-modules-load[237]: Inserted module 'overlay'
Oct 8 19:43:29.966053 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:43:29.967470 systemd-modules-load[237]: Inserted module 'br_netfilter'
Oct 8 19:43:29.968538 kernel: Bridge firewalling registered
Oct 8 19:43:29.968641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:43:29.970126 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:43:29.983841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:43:29.985614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:43:29.987268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:43:29.991122 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:43:29.994446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:43:30.002002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:43:30.005080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:43:30.013402 dracut-cmdline[275]: dracut-dracut-053
Oct 8 19:43:30.015856 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:43:30.032906 systemd-resolved[281]: Positive Trust Anchors:
Oct 8 19:43:30.032923 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:43:30.032953 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:43:30.037598 systemd-resolved[281]: Defaulting to hostname 'linux'.
Oct 8 19:43:30.038609 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:43:30.042144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:43:30.082721 kernel: SCSI subsystem initialized
Oct 8 19:43:30.087696 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:43:30.094706 kernel: iscsi: registered transport (tcp)
Oct 8 19:43:30.109709 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:43:30.109760 kernel: QLogic iSCSI HBA Driver
Oct 8 19:43:30.157942 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:43:30.178863 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:43:30.198986 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:43:30.199051 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:43:30.200100 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:43:30.247722 kernel: raid6: neonx8 gen() 15747 MB/s
Oct 8 19:43:30.264707 kernel: raid6: neonx4 gen() 15625 MB/s
Oct 8 19:43:30.281710 kernel: raid6: neonx2 gen() 13230 MB/s
Oct 8 19:43:30.298704 kernel: raid6: neonx1 gen() 10437 MB/s
Oct 8 19:43:30.315700 kernel: raid6: int64x8 gen() 6952 MB/s
Oct 8 19:43:30.332710 kernel: raid6: int64x4 gen() 7352 MB/s
Oct 8 19:43:30.349712 kernel: raid6: int64x2 gen() 6120 MB/s
Oct 8 19:43:30.366704 kernel: raid6: int64x1 gen() 5053 MB/s
Oct 8 19:43:30.366724 kernel: raid6: using algorithm neonx8 gen() 15747 MB/s
Oct 8 19:43:30.383859 kernel: raid6: .... xor() 11906 MB/s, rmw enabled
Oct 8 19:43:30.383895 kernel: raid6: using neon recovery algorithm
Oct 8 19:43:30.390103 kernel: xor: measuring software checksum speed
Oct 8 19:43:30.390128 kernel: 8regs : 19754 MB/sec
Oct 8 19:43:30.390137 kernel: 32regs : 19674 MB/sec
Oct 8 19:43:30.393125 kernel: arm64_neon : 1752 MB/sec
Oct 8 19:43:30.393141 kernel: xor: using function: 8regs (19754 MB/sec)
Oct 8 19:43:30.443812 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:43:30.453926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:43:30.464859 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:43:30.476626 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 8 19:43:30.485966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:43:30.503418 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:43:30.516058 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Oct 8 19:43:30.542274 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:43:30.549851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:43:30.587173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:43:30.596982 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:43:30.609909 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:43:30.612554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:43:30.615656 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:43:30.618223 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:43:30.626882 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 8 19:43:30.632168 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:43:30.637470 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:43:30.637514 kernel: GPT:9289727 != 19775487
Oct 8 19:43:30.637525 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:43:30.637535 kernel: GPT:9289727 != 19775487
Oct 8 19:43:30.637553 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:43:30.637563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:43:30.634034 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:43:30.639511 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:43:30.639616 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:43:30.644211 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:43:30.645873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:43:30.646053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:43:30.652377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:43:30.662667 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519)
Oct 8 19:43:30.664911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:43:30.669591 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (521)
Oct 8 19:43:30.667413 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:43:30.676148 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:43:30.682712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:43:30.688339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:43:30.698414 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:43:30.699642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:43:30.705950 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:43:30.720855 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:43:30.725836 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:43:30.729583 disk-uuid[553]: Primary Header is updated.
Oct 8 19:43:30.729583 disk-uuid[553]: Secondary Entries is updated.
Oct 8 19:43:30.729583 disk-uuid[553]: Secondary Header is updated.
Oct 8 19:43:30.732825 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:43:30.745084 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:43:31.742704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:43:31.742807 disk-uuid[554]: The operation has completed successfully.
Oct 8 19:43:31.770278 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:43:31.771369 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:43:31.794860 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:43:31.798038 sh[576]: Success
Oct 8 19:43:31.815704 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:43:31.851460 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:43:31.853314 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:43:31.854275 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:43:31.865609 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575
Oct 8 19:43:31.865647 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:43:31.865665 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:43:31.866708 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:43:31.868072 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:43:31.871813 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:43:31.873085 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:43:31.885465 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:43:31.887111 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:43:31.895732 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:43:31.895775 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:43:31.895792 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:43:31.901702 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:43:31.910183 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:43:31.911942 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:43:31.917884 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:43:31.925866 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:43:31.996913 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:43:32.011890 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:43:32.021879 ignition[672]: Ignition 2.18.0
Oct 8 19:43:32.021893 ignition[672]: Stage: fetch-offline
Oct 8 19:43:32.021926 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:43:32.021934 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:43:32.022013 ignition[672]: parsed url from cmdline: ""
Oct 8 19:43:32.022016 ignition[672]: no config URL provided
Oct 8 19:43:32.022030 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:43:32.022038 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:43:32.022069 ignition[672]: op(1): [started] loading QEMU firmware config module
Oct 8 19:43:32.022074 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:43:32.032142 ignition[672]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:43:32.039126 systemd-networkd[771]: lo: Link UP
Oct 8 19:43:32.039135 systemd-networkd[771]: lo: Gained carrier
Oct 8 19:43:32.039857 systemd-networkd[771]: Enumeration completed
Oct 8 19:43:32.040100 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:43:32.040255 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:43:32.040259 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:43:32.041505 systemd-networkd[771]: eth0: Link UP
Oct 8 19:43:32.041508 systemd-networkd[771]: eth0: Gained carrier
Oct 8 19:43:32.041514 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:43:32.042273 systemd[1]: Reached target network.target - Network.
Oct 8 19:43:32.061731 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:43:32.081823 ignition[672]: parsing config with SHA512: 7d05314490473d37509f8596e190b92d4fcf7accd0db32117b35a80034db158403f1be9c7e182fc66d2e01f7f340124df0c4a2e5c19632d9a2af8241b8d1aaa4
Oct 8 19:43:32.086520 unknown[672]: fetched base config from "system"
Oct 8 19:43:32.086530 unknown[672]: fetched user config from "qemu"
Oct 8 19:43:32.087067 ignition[672]: fetch-offline: fetch-offline passed
Oct 8 19:43:32.088526 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:43:32.087134 ignition[672]: Ignition finished successfully
Oct 8 19:43:32.090401 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:43:32.099867 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:43:32.110529 ignition[776]: Ignition 2.18.0
Oct 8 19:43:32.110542 ignition[776]: Stage: kargs
Oct 8 19:43:32.110737 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:43:32.110748 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:43:32.113827 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:43:32.111667 ignition[776]: kargs: kargs passed
Oct 8 19:43:32.116159 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:43:32.111723 ignition[776]: Ignition finished successfully
Oct 8 19:43:32.129698 ignition[784]: Ignition 2.18.0
Oct 8 19:43:32.129710 ignition[784]: Stage: disks
Oct 8 19:43:32.130232 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:43:32.130243 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:43:32.131105 ignition[784]: disks: disks passed
Oct 8 19:43:32.131147 ignition[784]: Ignition finished successfully
Oct 8 19:43:32.134563 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:43:32.136258 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:43:32.137320 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:43:32.138484 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:43:32.140205 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:43:32.142107 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:43:32.154015 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:43:32.165871 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:43:32.170455 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:43:32.183865 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:43:32.237188 kernel: EXT4-fs (vda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 19:43:32.237232 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:43:32.238435 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:43:32.251813 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:43:32.253714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:43:32.255127 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:43:32.255169 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:43:32.261777 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Oct 8 19:43:32.262002 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:43:32.255191 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:43:32.266699 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:43:32.266720 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:43:32.260152 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:43:32.266676 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:43:32.270450 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:43:32.271646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:43:32.311562 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:43:32.314785 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:43:32.317917 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:43:32.320979 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:43:32.402552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:43:32.414833 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:43:32.417643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:43:32.423768 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:43:32.441959 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:43:32.444866 ignition[914]: INFO : Ignition 2.18.0
Oct 8 19:43:32.444866 ignition[914]: INFO : Stage: mount
Oct 8 19:43:32.447436 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:43:32.447436 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:43:32.447436 ignition[914]: INFO : mount: mount passed
Oct 8 19:43:32.447436 ignition[914]: INFO : Ignition finished successfully
Oct 8 19:43:32.447231 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:43:32.466823 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:43:32.864583 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:43:32.876912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:43:32.886915 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Oct 8 19:43:32.889505 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:43:32.889524 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:43:32.889535 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:43:32.893700 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:43:32.894400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:43:32.920399 ignition[947]: INFO : Ignition 2.18.0
Oct 8 19:43:32.921906 ignition[947]: INFO : Stage: files
Oct 8 19:43:32.921906 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:43:32.921906 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:43:32.924785 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:43:32.926217 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:43:32.926217 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:43:32.931439 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:43:32.932726 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:43:32.934035 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:43:32.933116 unknown[947]: wrote ssh authorized keys file for user: core
Oct 8 19:43:32.937156 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:43:32.937156 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:43:32.985100 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:43:33.137963 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:43:33.137963 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:43:33.142429 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Oct 8 19:43:33.283157 systemd-networkd[771]: eth0: Gained IPv6LL
Oct 8 19:43:33.468893 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:43:33.726430 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:43:33.726430 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:43:33.729991 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:43:33.759635 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:43:33.763342 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:43:33.764977 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:43:33.764977 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:43:33.764977 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:43:33.764977 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:43:33.764977 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:43:33.764977 ignition[947]: INFO : files: files passed
Oct 8 19:43:33.764977 ignition[947]: INFO : Ignition finished successfully
Oct 8 19:43:33.766338 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:43:33.779105 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:43:33.780832 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:43:33.784239 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:43:33.784353 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:43:33.789996 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:43:33.793649 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:43:33.793649 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:43:33.796786 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:43:33.797664 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:43:33.799737 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:43:33.815862 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:43:33.835530 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:43:33.835636 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:43:33.837221 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:43:33.838832 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:43:33.840851 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:43:33.841548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:43:33.857864 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:43:33.860306 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:43:33.871063 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:43:33.872279 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:43:33.874446 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:43:33.876207 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:43:33.876324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:43:33.878823 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:43:33.880851 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:43:33.882459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:43:33.884148 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:43:33.886048 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:43:33.888084 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:43:33.889887 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:43:33.891795 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:43:33.893718 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:43:33.895442 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:43:33.896945 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:43:33.897081 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:43:33.899408 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:43:33.900546 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:43:33.902433 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:43:33.903241 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:43:33.904498 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:43:33.904616 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:43:33.907296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:43:33.907421 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:43:33.909744 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:43:33.911316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:43:33.911783 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:43:33.913315 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:43:33.914960 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:43:33.916698 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:43:33.916791 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:43:33.918397 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:43:33.918480 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:43:33.920086 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:43:33.920202 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:43:33.922401 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:43:33.922509 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:43:33.933878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:43:33.934785 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:43:33.934935 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:43:33.938091 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:43:33.939518 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:43:33.939666 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:43:33.941670 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:43:33.941881 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:43:33.946982 ignition[1003]: INFO : Ignition 2.18.0
Oct 8 19:43:33.946982 ignition[1003]: INFO : Stage: umount
Oct 8 19:43:33.955081 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:43:33.955081 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:43:33.955081 ignition[1003]: INFO : umount: umount passed
Oct 8 19:43:33.955081 ignition[1003]: INFO : Ignition finished successfully
Oct 8 19:43:33.948984 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:43:33.949112 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:43:33.955386 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:43:33.955875 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:43:33.955979 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:43:33.957522 systemd[1]: Stopped target network.target - Network.
Oct 8 19:43:33.959003 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:43:33.959380 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:43:33.960730 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:43:33.960786 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:43:33.963467 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:43:33.963524 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:43:33.965546 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:43:33.965604 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:43:33.969268 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:43:33.971060 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:43:33.975774 systemd-networkd[771]: eth0: DHCPv6 lease lost
Oct 8 19:43:33.977978 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:43:33.978124 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:43:33.980523 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:43:33.980654 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:43:33.983333 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:43:33.983385 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:43:33.993808 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:43:33.994699 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:43:33.994778 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:43:33.996882 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:43:33.996941 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:43:33.999127 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:43:33.999195 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:43:34.001296 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:43:34.001354 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:43:34.003470 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:43:34.007365 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:43:34.008100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:43:34.010534 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:43:34.010634 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:43:34.015409 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:43:34.015522 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:43:34.022342 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:43:34.022476 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:43:34.023978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:43:34.024034 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:43:34.025715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:43:34.025751 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:43:34.027767 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:43:34.027827 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:43:34.030388 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:43:34.030445 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:43:34.033158 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:43:34.033220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:43:34.046882 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:43:34.047807 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:43:34.047881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:43:34.049899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:43:34.049956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:43:34.051981 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:43:34.052099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:43:34.054314 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:43:34.056523 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:43:34.066025 systemd[1]: Switching root.
Oct 8 19:43:34.098801 systemd-journald[236]: Journal stopped
Oct 8 19:43:34.831294 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:43:34.831356 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:43:34.831377 kernel: SELinux: policy capability open_perms=1
Oct 8 19:43:34.831397 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:43:34.831417 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:43:34.831427 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:43:34.831440 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:43:34.831450 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:43:34.831459 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:43:34.831469 kernel: audit: type=1403 audit(1728416614.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:43:34.831483 systemd[1]: Successfully loaded SELinux policy in 30.080ms.
Oct 8 19:43:34.831502 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.426ms.
Oct 8 19:43:34.831514 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:43:34.831524 systemd[1]: Detected virtualization kvm.
Oct 8 19:43:34.831535 systemd[1]: Detected architecture arm64.
Oct 8 19:43:34.831546 systemd[1]: Detected first boot.
Oct 8 19:43:34.831556 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:43:34.831567 zram_generator::config[1049]: No configuration found.
Oct 8 19:43:34.831578 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:43:34.831589 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:43:34.831599 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:43:34.831610 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:43:34.831620 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:43:34.831632 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:43:34.831643 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:43:34.831654 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:43:34.831664 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:43:34.831674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:43:34.831711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:43:34.831723 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:43:34.831734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:43:34.831744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:43:34.831757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:43:34.831767 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:43:34.831777 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:43:34.831788 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:43:34.831798 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 19:43:34.831808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:43:34.831818 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:43:34.831828 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:43:34.831850 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:43:34.831864 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:43:34.831875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:43:34.831885 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:43:34.831896 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:43:34.831906 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:43:34.831918 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:43:34.831928 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:43:34.831940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:43:34.831969 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:43:34.831981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:43:34.831993 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:43:34.832004 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:43:34.832019 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:43:34.832031 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:43:34.832041 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:43:34.832052 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:43:34.832063 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:43:34.832074 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:43:34.832084 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:43:34.832094 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:43:34.832105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:43:34.832115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:43:34.832125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:43:34.832135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:43:34.832146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:43:34.832158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:43:34.832171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:43:34.832182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:43:34.832192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:43:34.832203 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:43:34.832214 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:43:34.832224 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:43:34.832234 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:43:34.832246 kernel: fuse: init (API version 7.39)
Oct 8 19:43:34.832256 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:43:34.832266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:43:34.832276 kernel: ACPI: bus type drm_connector registered
Oct 8 19:43:34.832285 kernel: loop: module loaded
Oct 8 19:43:34.832295 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:43:34.832305 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:43:34.832315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:43:34.832345 systemd-journald[1115]: Collecting audit messages is disabled.
Oct 8 19:43:34.832368 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:43:34.832379 systemd[1]: Stopped verity-setup.service.
Oct 8 19:43:34.832390 systemd-journald[1115]: Journal started
Oct 8 19:43:34.832423 systemd-journald[1115]: Runtime Journal (/run/log/journal/040e1087368e4ae4a1fd567f431b52fe) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:43:34.619480 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:43:34.637633 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:43:34.637974 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:43:34.836836 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:43:34.837479 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:43:34.838664 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:43:34.839895 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:43:34.840981 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:43:34.842203 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:43:34.843425 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:43:34.846714 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:43:34.847997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:43:34.849494 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:43:34.849633 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:43:34.851098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:43:34.851245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:43:34.852620 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:43:34.852860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:43:34.854164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:43:34.854296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:43:34.855782 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:43:34.855937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:43:34.857522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:43:34.857665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:43:34.859028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:43:34.860394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:43:34.861952 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:43:34.874189 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:43:34.884845 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:43:34.886877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:43:34.887963 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:43:34.888004 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:43:34.889871 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:43:34.893256 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:43:34.895409 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:43:34.896529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:43:34.900119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:43:34.902992 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:43:34.904203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:43:34.907084 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:43:34.908475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:43:34.911035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:43:34.912996 systemd-journald[1115]: Time spent on flushing to /var/log/journal/040e1087368e4ae4a1fd567f431b52fe is 18.905ms for 852 entries.
Oct 8 19:43:34.912996 systemd-journald[1115]: System Journal (/var/log/journal/040e1087368e4ae4a1fd567f431b52fe) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:43:34.950340 systemd-journald[1115]: Received client request to flush runtime journal.
Oct 8 19:43:34.950384 kernel: loop0: detected capacity change from 0 to 189592
Oct 8 19:43:34.950396 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:43:34.914975 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:43:34.917707 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:43:34.920311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:43:34.921806 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:43:34.923278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:43:34.924958 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:43:34.944328 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:43:34.945984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:43:34.947497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:43:34.950866 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:43:34.952674 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:43:34.954270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:43:34.964908 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:43:34.968702 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:43:34.977730 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:43:34.979614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:43:34.980227 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:43:34.988862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:43:34.994974 kernel: loop1: detected capacity change from 0 to 113672
Oct 8 19:43:35.009183 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Oct 8 19:43:35.009199 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Oct 8 19:43:35.014735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:43:35.019725 kernel: loop2: detected capacity change from 0 to 59688
Oct 8 19:43:35.075721 kernel: loop3: detected capacity change from 0 to 189592
Oct 8 19:43:35.087849 kernel: loop4: detected capacity change from 0 to 113672
Oct 8 19:43:35.092741 kernel: loop5: detected capacity change from 0 to 59688
Oct 8 19:43:35.095254 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:43:35.096402 (sd-merge)[1183]: Merged extensions into '/usr'.
Oct 8 19:43:35.099612 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:43:35.099627 systemd[1]: Reloading...
Oct 8 19:43:35.150621 zram_generator::config[1208]: No configuration found.
Oct 8 19:43:35.190337 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:43:35.242765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:43:35.279844 systemd[1]: Reloading finished in 179 ms.
Oct 8 19:43:35.309270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:43:35.310748 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:43:35.321837 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:43:35.323613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:43:35.335463 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:43:35.335478 systemd[1]: Reloading...
Oct 8 19:43:35.343608 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:43:35.344211 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:43:35.345051 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:43:35.345370 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Oct 8 19:43:35.345480 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Oct 8 19:43:35.347938 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:43:35.348054 systemd-tmpfiles[1243]: Skipping /boot
Oct 8 19:43:35.354638 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:43:35.354756 systemd-tmpfiles[1243]: Skipping /boot
Oct 8 19:43:35.375731 zram_generator::config[1266]: No configuration found.
Oct 8 19:43:35.458649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:43:35.495859 systemd[1]: Reloading finished in 160 ms.
Oct 8 19:43:35.512716 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:43:35.523108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:43:35.531133 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:43:35.533665 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:43:35.536028 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:43:35.539895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:43:35.547980 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:43:35.552961 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:43:35.556448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:43:35.557930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:43:35.561656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:43:35.564982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:43:35.566206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:43:35.572242 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:43:35.575554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:43:35.575730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:43:35.577702 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:43:35.589505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:43:35.589657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:43:35.589699 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Oct 8 19:43:35.593376 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:43:35.593518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:43:35.600833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:43:35.610037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:43:35.614956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:43:35.618729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:43:35.619835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:43:35.621078 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:43:35.624706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:43:35.626604 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:43:35.628612 augenrules[1340]: No rules
Oct 8 19:43:35.628722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:43:35.630421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:43:35.630547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:43:35.632892 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:43:35.634465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:43:35.634592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:43:35.639383 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:43:35.644749 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:43:35.661105 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1354)
Oct 8 19:43:35.661209 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:43:35.666713 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1344)
Oct 8 19:43:35.666939 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:43:35.668807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:43:35.678335 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 19:43:35.678657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:43:35.691889 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:43:35.695410 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:43:35.698136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:43:35.701293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:43:35.704128 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:43:35.707797 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:43:35.708901 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:43:35.709314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:43:35.709482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:43:35.712925 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:43:35.713080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:43:35.717100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:43:35.717248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:43:35.725271 systemd-resolved[1309]: Positive Trust Anchors:
Oct 8 19:43:35.725289 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:43:35.725319 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:43:35.727259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:43:35.734870 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Oct 8 19:43:35.738938 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:43:35.740146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:43:35.740219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:43:35.741949 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:43:35.744962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:43:35.756521 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:43:35.768372 systemd-networkd[1383]: lo: Link UP
Oct 8 19:43:35.768378 systemd-networkd[1383]: lo: Gained carrier
Oct 8 19:43:35.769084 systemd-networkd[1383]: Enumeration completed
Oct 8 19:43:35.769190 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:43:35.770553 systemd[1]: Reached target network.target - Network.
Oct 8 19:43:35.771363 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:43:35.771372 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:43:35.772140 systemd-networkd[1383]: eth0: Link UP
Oct 8 19:43:35.772147 systemd-networkd[1383]: eth0: Gained carrier
Oct 8 19:43:35.772160 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:43:35.777878 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:43:35.779513 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:43:35.781080 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:43:35.790751 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:43:35.791341 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection.
Oct 8 19:43:35.792927 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:43:35.792984 systemd-timesyncd[1384]: Initial clock synchronization to Tue 2024-10-08 19:43:36.095317 UTC.
Oct 8 19:43:35.810935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:43:35.820001 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:43:35.822665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:43:35.835324 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:43:35.847261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:43:35.863145 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:43:35.864411 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:43:35.866853 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:43:35.867914 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:43:35.869187 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:43:35.870590 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:43:35.871703 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:43:35.872903 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:43:35.874002 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:43:35.874047 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:43:35.875512 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:43:35.877296 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:43:35.879596 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:43:35.887663 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:43:35.889857 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:43:35.891413 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:43:35.892663 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:43:35.893622 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:43:35.894621 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:43:35.894653 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:43:35.895584 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:43:35.897584 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:43:35.897780 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:43:35.901845 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:43:35.903829 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:43:35.904939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:43:35.906971 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:43:35.911114 jq[1410]: false Oct 8 19:43:35.911979 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:43:35.919547 extend-filesystems[1411]: Found loop3 Oct 8 19:43:35.919547 extend-filesystems[1411]: Found loop4 Oct 8 19:43:35.919547 extend-filesystems[1411]: Found loop5 Oct 8 19:43:35.919547 extend-filesystems[1411]: Found vda Oct 8 19:43:35.919547 extend-filesystems[1411]: Found vda1 Oct 8 19:43:35.919547 extend-filesystems[1411]: Found vda2 Oct 8 19:43:35.919547 extend-filesystems[1411]: Found vda3 Oct 8 19:43:35.919547 extend-filesystems[1411]: Found usr Oct 8 19:43:35.946455 extend-filesystems[1411]: Found vda4 Oct 8 19:43:35.946455 extend-filesystems[1411]: Found vda6 Oct 8 19:43:35.946455 extend-filesystems[1411]: Found vda7 Oct 8 19:43:35.946455 extend-filesystems[1411]: Found vda9 Oct 8 19:43:35.946455 extend-filesystems[1411]: Checking size of /dev/vda9 Oct 8 19:43:35.946455 extend-filesystems[1411]: Resized partition /dev/vda9 Oct 8 19:43:35.961973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1346) Oct 8 19:43:35.962004 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 
19:43:35.924829 dbus-daemon[1409]: [system] SELinux support is enabled Oct 8 19:43:35.927901 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:43:35.962357 extend-filesystems[1421]: resize2fs 1.47.0 (5-Feb-2023) Oct 8 19:43:35.933816 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:43:35.937106 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:43:35.947914 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:43:35.948402 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:43:35.949939 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:43:35.957056 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:43:35.960181 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:43:35.973965 jq[1432]: true Oct 8 19:43:35.981417 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:43:35.967154 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:43:35.978178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:43:35.978446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:43:36.002040 extend-filesystems[1421]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:43:36.002040 extend-filesystems[1421]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:43:36.002040 extend-filesystems[1421]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:43:35.979069 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 8 19:43:36.015146 update_engine[1429]: I1008 19:43:36.008567 1429 main.cc:92] Flatcar Update Engine starting Oct 8 19:43:36.015350 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Oct 8 19:43:35.979243 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:43:36.019902 jq[1436]: true Oct 8 19:43:35.981933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:43:36.024454 update_engine[1429]: I1008 19:43:36.020460 1429 update_check_scheduler.cc:74] Next update check in 9m24s Oct 8 19:43:35.982090 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:43:36.001392 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:43:36.007407 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:43:36.007621 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:43:36.008103 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:43:36.008363 systemd-logind[1424]: New seat seat0. Oct 8 19:43:36.012139 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:43:36.025867 tar[1435]: linux-arm64/helm Oct 8 19:43:36.027567 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:43:36.029558 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:43:36.029691 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:43:36.032107 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 8 19:43:36.032386 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:43:36.041680 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:43:36.093770 bash[1465]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:43:36.097104 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:43:36.099888 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:43:36.106768 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:43:36.215101 containerd[1437]: time="2024-10-08T19:43:36.214951841Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:43:36.243140 containerd[1437]: time="2024-10-08T19:43:36.243085826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:43:36.243255 containerd[1437]: time="2024-10-08T19:43:36.243235852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.244773 containerd[1437]: time="2024-10-08T19:43:36.244732913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:43:36.244773 containerd[1437]: time="2024-10-08T19:43:36.244770856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245040 containerd[1437]: time="2024-10-08T19:43:36.245008348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245040 containerd[1437]: time="2024-10-08T19:43:36.245034625Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:43:36.245142 containerd[1437]: time="2024-10-08T19:43:36.245117152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245203 containerd[1437]: time="2024-10-08T19:43:36.245184153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245240 containerd[1437]: time="2024-10-08T19:43:36.245201879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245295 containerd[1437]: time="2024-10-08T19:43:36.245277639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245500 containerd[1437]: time="2024-10-08T19:43:36.245478310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245531 containerd[1437]: time="2024-10-08T19:43:36.245502470Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:43:36.245531 containerd[1437]: time="2024-10-08T19:43:36.245512682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245628 containerd[1437]: time="2024-10-08T19:43:36.245607164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:43:36.245628 containerd[1437]: time="2024-10-08T19:43:36.245625554Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:43:36.245698 containerd[1437]: time="2024-10-08T19:43:36.245681555Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:43:36.245750 containerd[1437]: time="2024-10-08T19:43:36.245698159Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:43:36.249373 containerd[1437]: time="2024-10-08T19:43:36.249331490Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:43:36.249373 containerd[1437]: time="2024-10-08T19:43:36.249374248Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:43:36.249440 containerd[1437]: time="2024-10-08T19:43:36.249397412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:43:36.249501 containerd[1437]: time="2024-10-08T19:43:36.249478236Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:43:36.249529 containerd[1437]: time="2024-10-08T19:43:36.249501525Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:43:36.249529 containerd[1437]: time="2024-10-08T19:43:36.249513356Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:43:36.249598 containerd[1437]: time="2024-10-08T19:43:36.249581851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 8 19:43:36.249748 containerd[1437]: time="2024-10-08T19:43:36.249730507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:43:36.249775 containerd[1437]: time="2024-10-08T19:43:36.249754999Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:43:36.249775 containerd[1437]: time="2024-10-08T19:43:36.249770235Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:43:36.249811 containerd[1437]: time="2024-10-08T19:43:36.249783643Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:43:36.249811 containerd[1437]: time="2024-10-08T19:43:36.249798712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.249857 containerd[1437]: time="2024-10-08T19:43:36.249814404Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.249857 containerd[1437]: time="2024-10-08T19:43:36.249827107Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.249857 containerd[1437]: time="2024-10-08T19:43:36.249838896Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.249857 containerd[1437]: time="2024-10-08T19:43:36.249851765Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.249927 containerd[1437]: time="2024-10-08T19:43:36.249864675Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Oct 8 19:43:36.249927 containerd[1437]: time="2024-10-08T19:43:36.249876548Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.249927 containerd[1437]: time="2024-10-08T19:43:36.249887382Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:43:36.250140 containerd[1437]: time="2024-10-08T19:43:36.249986390Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:43:36.250444 containerd[1437]: time="2024-10-08T19:43:36.250412058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:43:36.250482 containerd[1437]: time="2024-10-08T19:43:36.250463907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250482 containerd[1437]: time="2024-10-08T19:43:36.250479183Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:43:36.250523 containerd[1437]: time="2024-10-08T19:43:36.250501268Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:43:36.250710 containerd[1437]: time="2024-10-08T19:43:36.250674914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250805 containerd[1437]: time="2024-10-08T19:43:36.250786375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250833 containerd[1437]: time="2024-10-08T19:43:36.250807463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250833 containerd[1437]: time="2024-10-08T19:43:36.250820373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 8 19:43:36.250883 containerd[1437]: time="2024-10-08T19:43:36.250833408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250883 containerd[1437]: time="2024-10-08T19:43:36.250847398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250883 containerd[1437]: time="2024-10-08T19:43:36.250859229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250883 containerd[1437]: time="2024-10-08T19:43:36.250871683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.250952 containerd[1437]: time="2024-10-08T19:43:36.250885506Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:43:36.251071 containerd[1437]: time="2024-10-08T19:43:36.251035947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251106 containerd[1437]: time="2024-10-08T19:43:36.251077958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251106 containerd[1437]: time="2024-10-08T19:43:36.251092114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251151 containerd[1437]: time="2024-10-08T19:43:36.251105190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251151 containerd[1437]: time="2024-10-08T19:43:36.251118391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251151 containerd[1437]: time="2024-10-08T19:43:36.251131384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Oct 8 19:43:36.251205 containerd[1437]: time="2024-10-08T19:43:36.251152929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251205 containerd[1437]: time="2024-10-08T19:43:36.251164428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 19:43:36.251523 containerd[1437]: time="2024-10-08T19:43:36.251459540Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:43:36.251523 containerd[1437]: time="2024-10-08T19:43:36.251524216Z" level=info msg="Connect containerd service" Oct 8 19:43:36.251654 containerd[1437]: time="2024-10-08T19:43:36.251552071Z" level=info msg="using legacy CRI server" Oct 8 19:43:36.251654 containerd[1437]: time="2024-10-08T19:43:36.251559626Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:43:36.251724 containerd[1437]: time="2024-10-08T19:43:36.251694583Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:43:36.252855 containerd[1437]: time="2024-10-08T19:43:36.252821769Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:43:36.252903 containerd[1437]: time="2024-10-08T19:43:36.252884411Z" level=info msg="loading plugin 
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:43:36.252924 containerd[1437]: time="2024-10-08T19:43:36.252906662Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:43:36.252924 containerd[1437]: time="2024-10-08T19:43:36.252918285Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:43:36.253337 containerd[1437]: time="2024-10-08T19:43:36.253299784Z" level=info msg="Start subscribing containerd event" Oct 8 19:43:36.253369 containerd[1437]: time="2024-10-08T19:43:36.253355867Z" level=info msg="Start recovering state" Oct 8 19:43:36.253615 containerd[1437]: time="2024-10-08T19:43:36.253593567Z" level=info msg="Start event monitor" Oct 8 19:43:36.253644 containerd[1437]: time="2024-10-08T19:43:36.253619554Z" level=info msg="Start snapshots syncer" Oct 8 19:43:36.253644 containerd[1437]: time="2024-10-08T19:43:36.253629392Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:43:36.253680 containerd[1437]: time="2024-10-08T19:43:36.253636699Z" level=info msg="Start streaming server" Oct 8 19:43:36.253934 containerd[1437]: time="2024-10-08T19:43:36.253908771Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:43:36.254404 containerd[1437]: time="2024-10-08T19:43:36.254376200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:43:36.254520 containerd[1437]: time="2024-10-08T19:43:36.254440835Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 8 19:43:36.254596 containerd[1437]: time="2024-10-08T19:43:36.254582019Z" level=info msg="containerd successfully booted in 0.040457s" Oct 8 19:43:36.254700 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:43:36.315366 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:43:36.336589 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:43:36.344971 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:43:36.351568 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:43:36.353778 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:43:36.369110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:43:36.376978 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:43:36.379930 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:43:36.382119 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:43:36.383496 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:43:36.385966 tar[1435]: linux-arm64/LICENSE Oct 8 19:43:36.386029 tar[1435]: linux-arm64/README.md Oct 8 19:43:36.395764 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:43:37.189544 systemd-networkd[1383]: eth0: Gained IPv6LL Oct 8 19:43:37.192099 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:43:37.194502 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:43:37.206988 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:43:37.209440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:37.211629 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:43:37.226124 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Oct 8 19:43:37.226365 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:43:37.228191 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:43:37.231666 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:43:37.725494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:37.727179 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:43:37.730922 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:43:37.731943 systemd[1]: Startup finished in 596ms (kernel) + 4.512s (initrd) + 3.531s (userspace) = 8.640s. Oct 8 19:43:38.170643 kubelet[1523]: E1008 19:43:38.170488 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:43:38.172678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:43:38.172845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:43:43.008608 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:43:43.010869 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:38800.service - OpenSSH per-connection server daemon (10.0.0.1:38800). Oct 8 19:43:43.067735 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 38800 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:43:43.069572 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:43:43.085093 systemd-logind[1424]: New session 1 of user core. 
Oct 8 19:43:43.086434 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:43:43.104527 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:43:43.115653 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:43:43.127733 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:43:43.130407 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:43:43.228425 systemd[1540]: Queued start job for default target default.target. Oct 8 19:43:43.238659 systemd[1540]: Created slice app.slice - User Application Slice. Oct 8 19:43:43.238686 systemd[1540]: Reached target paths.target - Paths. Oct 8 19:43:43.238721 systemd[1540]: Reached target timers.target - Timers. Oct 8 19:43:43.240034 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:43:43.252463 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:43:43.252570 systemd[1540]: Reached target sockets.target - Sockets. Oct 8 19:43:43.252583 systemd[1540]: Reached target basic.target - Basic System. Oct 8 19:43:43.252618 systemd[1540]: Reached target default.target - Main User Target. Oct 8 19:43:43.252662 systemd[1540]: Startup finished in 115ms. Oct 8 19:43:43.253079 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:43:43.255147 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:43:43.320048 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). Oct 8 19:43:43.356909 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:43:43.358376 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:43:43.363912 systemd-logind[1424]: New session 2 of user core. 
Oct 8 19:43:43.370902 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:43:43.430098 sshd[1551]: pam_unix(sshd:session): session closed for user core Oct 8 19:43:43.438222 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:38810.service: Deactivated successfully. Oct 8 19:43:43.439655 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:43:43.443782 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:43:43.466678 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:38826.service - OpenSSH per-connection server daemon (10.0.0.1:38826). Oct 8 19:43:43.468056 systemd-logind[1424]: Removed session 2. Oct 8 19:43:43.497711 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 38826 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:43:43.499012 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:43:43.503748 systemd-logind[1424]: New session 3 of user core. Oct 8 19:43:43.513960 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:43:43.566943 sshd[1558]: pam_unix(sshd:session): session closed for user core Oct 8 19:43:43.581576 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:38826.service: Deactivated successfully. Oct 8 19:43:43.583080 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:43:43.584953 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:43:43.592204 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:38834.service - OpenSSH per-connection server daemon (10.0.0.1:38834). Oct 8 19:43:43.594038 systemd-logind[1424]: Removed session 3. Oct 8 19:43:43.621954 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 38834 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:43:43.623304 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:43:43.627574 systemd-logind[1424]: New session 4 of user core. 
Oct 8 19:43:43.636924 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:43:43.689936 sshd[1565]: pam_unix(sshd:session): session closed for user core Oct 8 19:43:43.700313 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:38834.service: Deactivated successfully. Oct 8 19:43:43.701868 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:43:43.703872 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:43:43.705239 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:38838.service - OpenSSH per-connection server daemon (10.0.0.1:38838). Oct 8 19:43:43.706566 systemd-logind[1424]: Removed session 4. Oct 8 19:43:43.742156 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:43:43.743587 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:43:43.749357 systemd-logind[1424]: New session 5 of user core. Oct 8 19:43:43.754896 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:43:43.824201 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:43:43.824449 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:43:43.839559 sudo[1575]: pam_unix(sudo:session): session closed for user root Oct 8 19:43:43.841552 sshd[1572]: pam_unix(sshd:session): session closed for user core Oct 8 19:43:43.856221 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:38838.service: Deactivated successfully. Oct 8 19:43:43.857766 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:43:43.859118 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:43:43.860343 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:38852.service - OpenSSH per-connection server daemon (10.0.0.1:38852). Oct 8 19:43:43.861199 systemd-logind[1424]: Removed session 5. 
Oct 8 19:43:43.893712 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 38852 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:43:43.894984 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:43:43.899728 systemd-logind[1424]: New session 6 of user core.
Oct 8 19:43:43.910415 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:43:43.963952 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:43:43.964487 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:43:43.968384 sudo[1584]: pam_unix(sudo:session): session closed for user root
Oct 8 19:43:43.973056 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:43:43.973298 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:43:43.993946 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:43:43.995100 auditctl[1587]: No rules
Oct 8 19:43:43.995965 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:43:43.996781 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:43:43.998420 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:43:44.021254 augenrules[1605]: No rules
Oct 8 19:43:44.022478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:43:44.023842 sudo[1583]: pam_unix(sudo:session): session closed for user root
Oct 8 19:43:44.025947 sshd[1580]: pam_unix(sshd:session): session closed for user core
Oct 8 19:43:44.042837 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:38852.service: Deactivated successfully.
Oct 8 19:43:44.044184 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:43:44.046215 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:43:44.056006 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:38864.service - OpenSSH per-connection server daemon (10.0.0.1:38864).
Oct 8 19:43:44.056854 systemd-logind[1424]: Removed session 6.
Oct 8 19:43:44.084338 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 38864 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:43:44.085460 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:43:44.089367 systemd-logind[1424]: New session 7 of user core.
Oct 8 19:43:44.098928 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:43:44.150891 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:43:44.151136 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:43:44.256024 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:43:44.256123 (dockerd)[1627]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:43:44.492089 dockerd[1627]: time="2024-10-08T19:43:44.491955081Z" level=info msg="Starting up"
Oct 8 19:43:44.575094 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3964391179-merged.mount: Deactivated successfully.
Oct 8 19:43:44.591936 dockerd[1627]: time="2024-10-08T19:43:44.591810341Z" level=info msg="Loading containers: start."
Oct 8 19:43:44.701745 kernel: Initializing XFRM netlink socket
Oct 8 19:43:44.780751 systemd-networkd[1383]: docker0: Link UP
Oct 8 19:43:44.800088 dockerd[1627]: time="2024-10-08T19:43:44.800056045Z" level=info msg="Loading containers: done."
Oct 8 19:43:44.870617 dockerd[1627]: time="2024-10-08T19:43:44.870561894Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:43:44.870862 dockerd[1627]: time="2024-10-08T19:43:44.870831755Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 8 19:43:44.870989 dockerd[1627]: time="2024-10-08T19:43:44.870964822Z" level=info msg="Daemon has completed initialization"
Oct 8 19:43:44.898298 dockerd[1627]: time="2024-10-08T19:43:44.898233912Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:43:44.898719 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:43:45.306525 containerd[1437]: time="2024-10-08T19:43:45.306482714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\""
Oct 8 19:43:45.973251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075173657.mount: Deactivated successfully.
Oct 8 19:43:47.322163 containerd[1437]: time="2024-10-08T19:43:47.322099701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:47.323159 containerd[1437]: time="2024-10-08T19:43:47.323110542Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691523"
Oct 8 19:43:47.324157 containerd[1437]: time="2024-10-08T19:43:47.324101251Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:47.326967 containerd[1437]: time="2024-10-08T19:43:47.326929332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:47.328190 containerd[1437]: time="2024-10-08T19:43:47.328149783Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 2.021620857s"
Oct 8 19:43:47.328233 containerd[1437]: time="2024-10-08T19:43:47.328189526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\""
Oct 8 19:43:47.329141 containerd[1437]: time="2024-10-08T19:43:47.329108576Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\""
Oct 8 19:43:48.251240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:43:48.260926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:43:48.349837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:43:48.353404 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:43:48.388644 kubelet[1824]: E1008 19:43:48.388569 1824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:43:48.391286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:43:48.391422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:43:48.716733 containerd[1437]: time="2024-10-08T19:43:48.716608504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:48.717703 containerd[1437]: time="2024-10-08T19:43:48.717119324Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460088"
Oct 8 19:43:48.718279 containerd[1437]: time="2024-10-08T19:43:48.718246886Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:48.720855 containerd[1437]: time="2024-10-08T19:43:48.720817193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:48.722042 containerd[1437]: time="2024-10-08T19:43:48.722004889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 1.392861061s"
Oct 8 19:43:48.722085 containerd[1437]: time="2024-10-08T19:43:48.722043461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\""
Oct 8 19:43:48.722491 containerd[1437]: time="2024-10-08T19:43:48.722457510Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\""
Oct 8 19:43:50.813423 containerd[1437]: time="2024-10-08T19:43:50.813366010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:50.813867 containerd[1437]: time="2024-10-08T19:43:50.813835814Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018560"
Oct 8 19:43:50.814698 containerd[1437]: time="2024-10-08T19:43:50.814658060Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:50.817242 containerd[1437]: time="2024-10-08T19:43:50.817209973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:50.818343 containerd[1437]: time="2024-10-08T19:43:50.818307938Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 2.095818966s"
Oct 8 19:43:50.818380 containerd[1437]: time="2024-10-08T19:43:50.818343303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\""
Oct 8 19:43:50.818847 containerd[1437]: time="2024-10-08T19:43:50.818753762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\""
Oct 8 19:43:53.181779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978883790.mount: Deactivated successfully.
Oct 8 19:43:53.390657 containerd[1437]: time="2024-10-08T19:43:53.390606379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:53.392199 containerd[1437]: time="2024-10-08T19:43:53.391993456Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753317"
Oct 8 19:43:53.392969 containerd[1437]: time="2024-10-08T19:43:53.392930261Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:53.394897 containerd[1437]: time="2024-10-08T19:43:53.394869045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:53.395660 containerd[1437]: time="2024-10-08T19:43:53.395617919Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 2.576831347s"
Oct 8 19:43:53.395660 containerd[1437]: time="2024-10-08T19:43:53.395659079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\""
Oct 8 19:43:53.396228 containerd[1437]: time="2024-10-08T19:43:53.396202674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:43:54.012498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460406195.mount: Deactivated successfully.
Oct 8 19:43:54.611966 containerd[1437]: time="2024-10-08T19:43:54.611903399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:54.613203 containerd[1437]: time="2024-10-08T19:43:54.613164696Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 8 19:43:54.613998 containerd[1437]: time="2024-10-08T19:43:54.613963376Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:54.617349 containerd[1437]: time="2024-10-08T19:43:54.617314390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:54.619477 containerd[1437]: time="2024-10-08T19:43:54.619440713Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.223210216s"
Oct 8 19:43:54.619520 containerd[1437]: time="2024-10-08T19:43:54.619484221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 19:43:54.620163 containerd[1437]: time="2024-10-08T19:43:54.620133232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 8 19:43:55.132138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489285062.mount: Deactivated successfully.
Oct 8 19:43:55.135275 containerd[1437]: time="2024-10-08T19:43:55.135214809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:55.135781 containerd[1437]: time="2024-10-08T19:43:55.135741379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Oct 8 19:43:55.136668 containerd[1437]: time="2024-10-08T19:43:55.136616308Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:55.138737 containerd[1437]: time="2024-10-08T19:43:55.138678938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:55.139780 containerd[1437]: time="2024-10-08T19:43:55.139752940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.585797ms"
Oct 8 19:43:55.139858 containerd[1437]: time="2024-10-08T19:43:55.139783351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Oct 8 19:43:55.140463 containerd[1437]: time="2024-10-08T19:43:55.140257605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Oct 8 19:43:55.739696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661330642.mount: Deactivated successfully.
Oct 8 19:43:58.133039 containerd[1437]: time="2024-10-08T19:43:58.132945146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:58.134264 containerd[1437]: time="2024-10-08T19:43:58.134220135Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868194"
Oct 8 19:43:58.137038 containerd[1437]: time="2024-10-08T19:43:58.136995084Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:58.140623 containerd[1437]: time="2024-10-08T19:43:58.140580613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:58.141835 containerd[1437]: time="2024-10-08T19:43:58.141798809Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.001509437s"
Oct 8 19:43:58.141876 containerd[1437]: time="2024-10-08T19:43:58.141833478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Oct 8 19:43:58.501251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:43:58.512062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:43:58.598859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:43:58.602775 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:43:58.637375 kubelet[1980]: E1008 19:43:58.637326 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:43:58.640101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:43:58.640250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:44:02.930138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:44:02.946010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:44:02.961981 systemd[1]: Reloading requested from client PID 1997 ('systemctl') (unit session-7.scope)...
Oct 8 19:44:02.961996 systemd[1]: Reloading...
Oct 8 19:44:03.029712 zram_generator::config[2034]: No configuration found.
Oct 8 19:44:03.179467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:44:03.233268 systemd[1]: Reloading finished in 270 ms.
Oct 8 19:44:03.279914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:44:03.282420 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:44:03.282617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:44:03.284036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:44:03.376329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:44:03.380115 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:44:03.413268 kubelet[2081]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:44:03.413268 kubelet[2081]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:44:03.413268 kubelet[2081]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:44:03.413624 kubelet[2081]: I1008 19:44:03.413376 2081 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:44:03.835478 kubelet[2081]: I1008 19:44:03.835434 2081 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 8 19:44:03.835478 kubelet[2081]: I1008 19:44:03.835463 2081 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:44:03.835730 kubelet[2081]: I1008 19:44:03.835712 2081 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 8 19:44:03.864824 kubelet[2081]: E1008 19:44:03.864795 2081 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:03.865673 kubelet[2081]: I1008 19:44:03.865637 2081 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:44:03.873651 kubelet[2081]: E1008 19:44:03.873618 2081 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 8 19:44:03.873651 kubelet[2081]: I1008 19:44:03.873647 2081 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 8 19:44:03.878855 kubelet[2081]: I1008 19:44:03.878838 2081 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:44:03.879724 kubelet[2081]: I1008 19:44:03.879693 2081 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 8 19:44:03.879847 kubelet[2081]: I1008 19:44:03.879822 2081 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:44:03.879997 kubelet[2081]: I1008 19:44:03.879848 2081 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 8 19:44:03.880084 kubelet[2081]: I1008 19:44:03.880005 2081 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:44:03.880084 kubelet[2081]: I1008 19:44:03.880013 2081 container_manager_linux.go:300] "Creating device plugin manager"
Oct 8 19:44:03.880177 kubelet[2081]: I1008 19:44:03.880164 2081 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:44:03.882710 kubelet[2081]: I1008 19:44:03.882675 2081 kubelet.go:408] "Attempting to sync node with API server"
Oct 8 19:44:03.883349 kubelet[2081]: I1008 19:44:03.882804 2081 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:44:03.883349 kubelet[2081]: W1008 19:44:03.882786 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:03.883349 kubelet[2081]: I1008 19:44:03.882905 2081 kubelet.go:314] "Adding apiserver pod source"
Oct 8 19:44:03.883349 kubelet[2081]: I1008 19:44:03.882925 2081 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:44:03.883604 kubelet[2081]: W1008 19:44:03.883563 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:03.883716 kubelet[2081]: E1008 19:44:03.883676 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:03.883796 kubelet[2081]: E1008 19:44:03.883719 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:03.887164 kubelet[2081]: I1008 19:44:03.887146 2081 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:44:03.890989 kubelet[2081]: I1008 19:44:03.890969 2081 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:44:03.891729 kubelet[2081]: W1008 19:44:03.891711 2081 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:44:03.894575 kubelet[2081]: I1008 19:44:03.894557 2081 server.go:1269] "Started kubelet"
Oct 8 19:44:03.896086 kubelet[2081]: I1008 19:44:03.896040 2081 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:44:03.896347 kubelet[2081]: I1008 19:44:03.896322 2081 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:44:03.896458 kubelet[2081]: I1008 19:44:03.896434 2081 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:44:03.897890 kubelet[2081]: I1008 19:44:03.897844 2081 server.go:460] "Adding debug handlers to kubelet server"
Oct 8 19:44:03.897890 kubelet[2081]: E1008 19:44:03.896870 2081 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc91c7b334b192 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:44:03.894522258 +0000 UTC m=+0.511199928,LastTimestamp:2024-10-08 19:44:03.894522258 +0000 UTC m=+0.511199928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:44:03.898523 kubelet[2081]: I1008 19:44:03.898404 2081 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:44:03.898523 kubelet[2081]: I1008 19:44:03.898440 2081 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 8 19:44:03.898927 kubelet[2081]: I1008 19:44:03.898798 2081 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 8 19:44:03.898927 kubelet[2081]: E1008 19:44:03.898919 2081 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:44:03.899707 kubelet[2081]: E1008 19:44:03.899689 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:03.899800 kubelet[2081]: W1008 19:44:03.899671 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:03.899894 kubelet[2081]: E1008 19:44:03.899877 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:03.899964 kubelet[2081]: I1008 19:44:03.899806 2081 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:44:03.900034 kubelet[2081]: I1008 19:44:03.899769 2081 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 8 19:44:03.900096 kubelet[2081]: E1008 19:44:03.900016 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms"
Oct 8 19:44:03.900314 kubelet[2081]: I1008 19:44:03.900296 2081 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:44:03.900390 kubelet[2081]: I1008 19:44:03.900375 2081 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:44:03.903532 kubelet[2081]: I1008 19:44:03.901808 2081 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:44:03.913460 kubelet[2081]: I1008 19:44:03.913442 2081 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:44:03.913460 kubelet[2081]: I1008 19:44:03.913457 2081 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:44:03.913589 kubelet[2081]: I1008 19:44:03.913473 2081 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:44:03.979890 kubelet[2081]: I1008 19:44:03.979856 2081 policy_none.go:49] "None policy: Start"
Oct 8 19:44:03.981601 kubelet[2081]: I1008 19:44:03.981247 2081 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:44:03.981601 kubelet[2081]: I1008 19:44:03.981272 2081 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:44:03.983994 kubelet[2081]: I1008 19:44:03.983964 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:44:03.985277 kubelet[2081]: I1008 19:44:03.985218 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:44:03.985277 kubelet[2081]: I1008 19:44:03.985242 2081 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:44:03.985277 kubelet[2081]: I1008 19:44:03.985259 2081 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 8 19:44:03.985393 kubelet[2081]: E1008 19:44:03.985301 2081 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:44:03.986422 kubelet[2081]: W1008 19:44:03.985814 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:03.986422 kubelet[2081]: E1008 19:44:03.985859 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:03.989369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 19:44:04.000886 kubelet[2081]: E1008 19:44:04.000833 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:04.006340 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 19:44:04.009082 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 19:44:04.023415 kubelet[2081]: I1008 19:44:04.023380 2081 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:44:04.023747 kubelet[2081]: I1008 19:44:04.023578 2081 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:44:04.023747 kubelet[2081]: I1008 19:44:04.023596 2081 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:44:04.023856 kubelet[2081]: I1008 19:44:04.023819 2081 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:44:04.025771 kubelet[2081]: E1008 19:44:04.025698 2081 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:44:04.097974 systemd[1]: Created slice kubepods-burstable-pod442d751ee4d60708ee366de9e593a9dd.slice - libcontainer container kubepods-burstable-pod442d751ee4d60708ee366de9e593a9dd.slice. Oct 8 19:44:04.100938 kubelet[2081]: E1008 19:44:04.100895 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Oct 8 19:44:04.110903 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice. Oct 8 19:44:04.114077 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice. 
Oct 8 19:44:04.126061 kubelet[2081]: I1008 19:44:04.126017 2081 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:44:04.126468 kubelet[2081]: E1008 19:44:04.126423 2081 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Oct 8 19:44:04.201989 kubelet[2081]: I1008 19:44:04.201908 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 19:44:04.201989 kubelet[2081]: I1008 19:44:04.201954 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:04.201989 kubelet[2081]: I1008 19:44:04.201974 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:04.202176 kubelet[2081]: I1008 19:44:04.201995 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:04.202176 kubelet[2081]: I1008 19:44:04.202030 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/442d751ee4d60708ee366de9e593a9dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"442d751ee4d60708ee366de9e593a9dd\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:04.202176 kubelet[2081]: I1008 19:44:04.202045 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/442d751ee4d60708ee366de9e593a9dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"442d751ee4d60708ee366de9e593a9dd\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:04.202176 kubelet[2081]: I1008 19:44:04.202060 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/442d751ee4d60708ee366de9e593a9dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"442d751ee4d60708ee366de9e593a9dd\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:04.202176 kubelet[2081]: I1008 19:44:04.202074 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:04.202284 kubelet[2081]: I1008 19:44:04.202091 2081 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:04.328443 kubelet[2081]: I1008 19:44:04.328405 2081 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:44:04.328816 kubelet[2081]: E1008 19:44:04.328780 2081 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Oct 8 19:44:04.409234 kubelet[2081]: E1008 19:44:04.409110 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:04.409943 containerd[1437]: time="2024-10-08T19:44:04.409876632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:442d751ee4d60708ee366de9e593a9dd,Namespace:kube-system,Attempt:0,}"
Oct 8 19:44:04.413013 kubelet[2081]: E1008 19:44:04.412982 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:04.413381 containerd[1437]: time="2024-10-08T19:44:04.413294342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}"
Oct 8 19:44:04.415716 kubelet[2081]: E1008 19:44:04.415672 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:04.416044 containerd[1437]: time="2024-10-08T19:44:04.415995809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}"
Oct 8 19:44:04.501960 kubelet[2081]: E1008 19:44:04.501911 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms"
Oct 8 19:44:04.730221 kubelet[2081]: I1008 19:44:04.730113 2081 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:44:04.730508 kubelet[2081]: E1008 19:44:04.730458 2081 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Oct 8 19:44:04.795115 kubelet[2081]: W1008 19:44:04.795044 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:04.795115 kubelet[2081]: E1008 19:44:04.795115 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:04.939326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234677536.mount: Deactivated successfully.
Oct 8 19:44:04.941974 containerd[1437]: time="2024-10-08T19:44:04.941928514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:44:04.943830 containerd[1437]: time="2024-10-08T19:44:04.943795912Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:44:04.944916 containerd[1437]: time="2024-10-08T19:44:04.944889054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Oct 8 19:44:04.945751 containerd[1437]: time="2024-10-08T19:44:04.945730529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:44:04.946755 containerd[1437]: time="2024-10-08T19:44:04.946708048Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:44:04.947758 containerd[1437]: time="2024-10-08T19:44:04.947721238Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:44:04.948139 containerd[1437]: time="2024-10-08T19:44:04.948100739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:44:04.950360 containerd[1437]: time="2024-10-08T19:44:04.950314047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:44:04.953699 containerd[1437]: time="2024-10-08T19:44:04.951885298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.794084ms"
Oct 8 19:44:04.953699 containerd[1437]: time="2024-10-08T19:44:04.952573877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.569571ms"
Oct 8 19:44:04.956242 containerd[1437]: time="2024-10-08T19:44:04.956205740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.752695ms"
Oct 8 19:44:05.023734 kubelet[2081]: W1008 19:44:05.023659 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:05.023860 kubelet[2081]: E1008 19:44:05.023744 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:05.098757 containerd[1437]: time="2024-10-08T19:44:05.098630375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:05.098757 containerd[1437]: time="2024-10-08T19:44:05.098718284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:05.098757 containerd[1437]: time="2024-10-08T19:44:05.098732976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099284489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099335289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099349100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099358067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099196100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099278084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099363231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099379003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:05.099505 containerd[1437]: time="2024-10-08T19:44:05.099201824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:05.118859 systemd[1]: Started cri-containerd-2c5e093edde0fc67a6b4ec6e8d638b482cf493a5c39a6c5684019ac3ffeb7247.scope - libcontainer container 2c5e093edde0fc67a6b4ec6e8d638b482cf493a5c39a6c5684019ac3ffeb7247.
Oct 8 19:44:05.119992 systemd[1]: Started cri-containerd-3ea2f099fb777713aaca6187c736cd566c61a0a5e70472d2c96e4b53db1fb6a6.scope - libcontainer container 3ea2f099fb777713aaca6187c736cd566c61a0a5e70472d2c96e4b53db1fb6a6.
Oct 8 19:44:05.121247 systemd[1]: Started cri-containerd-6bd7d86baaec00f1b3ed1760a649fa111dc2928268024efd8d8d9cf27340dca7.scope - libcontainer container 6bd7d86baaec00f1b3ed1760a649fa111dc2928268024efd8d8d9cf27340dca7.
Oct 8 19:44:05.153542 containerd[1437]: time="2024-10-08T19:44:05.153481135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bd7d86baaec00f1b3ed1760a649fa111dc2928268024efd8d8d9cf27340dca7\""
Oct 8 19:44:05.155094 kubelet[2081]: E1008 19:44:05.155070 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:05.155396 containerd[1437]: time="2024-10-08T19:44:05.155346762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c5e093edde0fc67a6b4ec6e8d638b482cf493a5c39a6c5684019ac3ffeb7247\""
Oct 8 19:44:05.155933 kubelet[2081]: E1008 19:44:05.155915 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:05.158095 containerd[1437]: time="2024-10-08T19:44:05.158049646Z" level=info msg="CreateContainer within sandbox \"2c5e093edde0fc67a6b4ec6e8d638b482cf493a5c39a6c5684019ac3ffeb7247\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 19:44:05.158201 containerd[1437]: time="2024-10-08T19:44:05.158052569Z" level=info msg="CreateContainer within sandbox \"6bd7d86baaec00f1b3ed1760a649fa111dc2928268024efd8d8d9cf27340dca7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:44:05.163260 containerd[1437]: time="2024-10-08T19:44:05.163224555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:442d751ee4d60708ee366de9e593a9dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ea2f099fb777713aaca6187c736cd566c61a0a5e70472d2c96e4b53db1fb6a6\""
Oct 8 19:44:05.164022 kubelet[2081]: W1008 19:44:05.163920 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:05.164022 kubelet[2081]: E1008 19:44:05.163985 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:05.164022 kubelet[2081]: E1008 19:44:05.163989 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:05.165869 containerd[1437]: time="2024-10-08T19:44:05.165838289Z" level=info msg="CreateContainer within sandbox \"3ea2f099fb777713aaca6187c736cd566c61a0a5e70472d2c96e4b53db1fb6a6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 19:44:05.181070 containerd[1437]: time="2024-10-08T19:44:05.181015861Z" level=info msg="CreateContainer within sandbox \"6bd7d86baaec00f1b3ed1760a649fa111dc2928268024efd8d8d9cf27340dca7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05b38b3df19dde6e3f12370614fd0fb3b005cc27f9544fbcedc3250035771cda\""
Oct 8 19:44:05.181809 containerd[1437]: time="2024-10-08T19:44:05.181760806Z" level=info msg="StartContainer for \"05b38b3df19dde6e3f12370614fd0fb3b005cc27f9544fbcedc3250035771cda\""
Oct 8 19:44:05.186821 containerd[1437]: time="2024-10-08T19:44:05.185067606Z" level=info msg="CreateContainer within sandbox \"2c5e093edde0fc67a6b4ec6e8d638b482cf493a5c39a6c5684019ac3ffeb7247\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"50c58ebba8470cebde43ac287d265fdef92cf71480afb25484a8bdade6f9206d\""
Oct 8 19:44:05.186821 containerd[1437]: time="2024-10-08T19:44:05.185487176Z" level=info msg="StartContainer for \"50c58ebba8470cebde43ac287d265fdef92cf71480afb25484a8bdade6f9206d\""
Oct 8 19:44:05.194112 kubelet[2081]: W1008 19:44:05.194064 2081 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Oct 8 19:44:05.194112 kubelet[2081]: E1008 19:44:05.194111 2081 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:44:05.204871 systemd[1]: Started cri-containerd-05b38b3df19dde6e3f12370614fd0fb3b005cc27f9544fbcedc3250035771cda.scope - libcontainer container 05b38b3df19dde6e3f12370614fd0fb3b005cc27f9544fbcedc3250035771cda.
Oct 8 19:44:05.207562 systemd[1]: Started cri-containerd-50c58ebba8470cebde43ac287d265fdef92cf71480afb25484a8bdade6f9206d.scope - libcontainer container 50c58ebba8470cebde43ac287d265fdef92cf71480afb25484a8bdade6f9206d.
Oct 8 19:44:05.258356 containerd[1437]: time="2024-10-08T19:44:05.258285085Z" level=info msg="StartContainer for \"05b38b3df19dde6e3f12370614fd0fb3b005cc27f9544fbcedc3250035771cda\" returns successfully"
Oct 8 19:44:05.258460 containerd[1437]: time="2024-10-08T19:44:05.258429999Z" level=info msg="StartContainer for \"50c58ebba8470cebde43ac287d265fdef92cf71480afb25484a8bdade6f9206d\" returns successfully"
Oct 8 19:44:05.266070 containerd[1437]: time="2024-10-08T19:44:05.266027932Z" level=info msg="CreateContainer within sandbox \"3ea2f099fb777713aaca6187c736cd566c61a0a5e70472d2c96e4b53db1fb6a6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bf1900bf08ab0ee82095392743afc6002cbc782e5827b50d53ec04ef9dfe7b5\""
Oct 8 19:44:05.266450 containerd[1437]: time="2024-10-08T19:44:05.266423162Z" level=info msg="StartContainer for \"7bf1900bf08ab0ee82095392743afc6002cbc782e5827b50d53ec04ef9dfe7b5\""
Oct 8 19:44:05.294856 systemd[1]: Started cri-containerd-7bf1900bf08ab0ee82095392743afc6002cbc782e5827b50d53ec04ef9dfe7b5.scope - libcontainer container 7bf1900bf08ab0ee82095392743afc6002cbc782e5827b50d53ec04ef9dfe7b5.
Oct 8 19:44:05.302770 kubelet[2081]: E1008 19:44:05.302720 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s"
Oct 8 19:44:05.342716 containerd[1437]: time="2024-10-08T19:44:05.339860814Z" level=info msg="StartContainer for \"7bf1900bf08ab0ee82095392743afc6002cbc782e5827b50d53ec04ef9dfe7b5\" returns successfully"
Oct 8 19:44:05.532412 kubelet[2081]: I1008 19:44:05.532374 2081 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:44:05.995372 kubelet[2081]: E1008 19:44:05.994792 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:05.995863 kubelet[2081]: E1008 19:44:05.995838 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:05.999303 kubelet[2081]: E1008 19:44:05.999269 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:06.909031 kubelet[2081]: E1008 19:44:06.908993 2081 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 8 19:44:06.924196 kubelet[2081]: I1008 19:44:06.924156 2081 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Oct 8 19:44:06.924330 kubelet[2081]: E1008 19:44:06.924213 2081 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 8 19:44:06.935006 kubelet[2081]: E1008 19:44:06.934960 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.004236 kubelet[2081]: E1008 19:44:07.004150 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:07.035553 kubelet[2081]: E1008 19:44:07.035519 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.135858 kubelet[2081]: E1008 19:44:07.135819 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.236816 kubelet[2081]: E1008 19:44:07.236347 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.336850 kubelet[2081]: E1008 19:44:07.336812 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.437277 kubelet[2081]: E1008 19:44:07.437225 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.537960 kubelet[2081]: E1008 19:44:07.537910 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.639021 kubelet[2081]: E1008 19:44:07.638947 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.739515 kubelet[2081]: E1008 19:44:07.739474 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.842396 kubelet[2081]: E1008 19:44:07.840088 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:07.940388 kubelet[2081]: E1008 19:44:07.940346 2081 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:08.003958 kubelet[2081]: E1008 19:44:08.003915 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:08.862652 systemd[1]: Reloading requested from client PID 2361 ('systemctl') (unit session-7.scope)...
Oct 8 19:44:08.862671 systemd[1]: Reloading...
Oct 8 19:44:08.885129 kubelet[2081]: I1008 19:44:08.885094 2081 apiserver.go:52] "Watching apiserver"
Oct 8 19:44:08.900621 kubelet[2081]: I1008 19:44:08.900576 2081 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 8 19:44:08.934720 zram_generator::config[2401]: No configuration found.
Oct 8 19:44:09.077404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:44:09.143284 systemd[1]: Reloading finished in 279 ms.
Oct 8 19:44:09.179616 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:44:09.192616 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:44:09.192874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:44:09.203017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:44:09.291169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:44:09.295006 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:44:09.335313 kubelet[2440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:44:09.335313 kubelet[2440]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:44:09.335313 kubelet[2440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:44:09.335313 kubelet[2440]: I1008 19:44:09.334731 2440 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:44:09.342001 kubelet[2440]: I1008 19:44:09.341960 2440 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 8 19:44:09.342001 kubelet[2440]: I1008 19:44:09.341988 2440 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:44:09.342710 kubelet[2440]: I1008 19:44:09.342405 2440 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 8 19:44:09.343840 kubelet[2440]: I1008 19:44:09.343816 2440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:44:09.345739 kubelet[2440]: I1008 19:44:09.345708 2440 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:44:09.348772 kubelet[2440]: E1008 19:44:09.348741 2440 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 8 19:44:09.348772 kubelet[2440]: I1008 19:44:09.348769 2440 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 8 19:44:09.351169 kubelet[2440]: I1008 19:44:09.351148 2440 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:44:09.351254 kubelet[2440]: I1008 19:44:09.351240 2440 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 8 19:44:09.351356 kubelet[2440]: I1008 19:44:09.351330 2440 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:44:09.351504 kubelet[2440]: I1008 19:44:09.351357 2440 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 8 19:44:09.351575 kubelet[2440]: I1008 19:44:09.351514 2440 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:44:09.351575 kubelet[2440]: I1008 19:44:09.351524 2440 container_manager_linux.go:300] "Creating device plugin manager"
Oct 8 19:44:09.351575 kubelet[2440]: I1008 19:44:09.351553 2440 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:44:09.351660 kubelet[2440]: I1008 19:44:09.351649 2440 kubelet.go:408] "Attempting to sync node with API server"
Oct 8 19:44:09.351705 kubelet[2440]: I1008 19:44:09.351662 2440 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:44:09.351705 kubelet[2440]: I1008 19:44:09.351697 2440 kubelet.go:314] "Adding apiserver pod source"
Oct 8 19:44:09.352179 kubelet[2440]: I1008 19:44:09.351708 2440 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:44:09.353002 kubelet[2440]: I1008 19:44:09.352536 2440 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:44:09.353002 kubelet[2440]: I1008 19:44:09.352995 2440 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:44:09.354269 kubelet[2440]: I1008 19:44:09.353328 2440 server.go:1269] "Started kubelet"
Oct 8 19:44:09.354269 kubelet[2440]: I1008 19:44:09.354020 2440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:44:09.354269 kubelet[2440]: I1008 19:44:09.354227 2440 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:44:09.355322 kubelet[2440]: I1008 19:44:09.355275 2440 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:44:09.356335 kubelet[2440]: I1008 19:44:09.356309 2440 server.go:460] "Adding debug handlers to kubelet server"
Oct 8 19:44:09.357729 kubelet[2440]: I1008 19:44:09.357705 2440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:44:09.364383 kubelet[2440]: I1008 19:44:09.364349 2440 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 8 19:44:09.364966 kubelet[2440]: E1008 19:44:09.364833 2440 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:44:09.369202 kubelet[2440]: I1008 19:44:09.366312 2440 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 8 19:44:09.369202 kubelet[2440]: E1008 19:44:09.366532 2440 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:44:09.369202 kubelet[2440]: I1008 19:44:09.366895 2440 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 8 19:44:09.369202 kubelet[2440]: I1008 19:44:09.367039 2440 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:44:09.374738 kubelet[2440]: I1008 19:44:09.372574 2440 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:44:09.374738 kubelet[2440]: I1008 19:44:09.372656 2440 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:44:09.374738 kubelet[2440]: I1008 19:44:09.374304 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:44:09.375336 kubelet[2440]: I1008 19:44:09.375313 2440 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:44:09.375474 kubelet[2440]: I1008 19:44:09.375449 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:44:09.375474 kubelet[2440]: I1008 19:44:09.375472 2440 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:44:09.375533 kubelet[2440]: I1008 19:44:09.375490 2440 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 8 19:44:09.375563 kubelet[2440]: E1008 19:44:09.375534 2440 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:44:09.414687 kubelet[2440]: I1008 19:44:09.414576 2440 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:44:09.414687 kubelet[2440]: I1008 19:44:09.414596 2440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:44:09.414687 kubelet[2440]: I1008 19:44:09.414616 2440 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:44:09.414834 kubelet[2440]: I1008 19:44:09.414778 2440 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:44:09.414834 kubelet[2440]: I1008 19:44:09.414789 2440 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:44:09.414834 kubelet[2440]: I1008 19:44:09.414806 2440 policy_none.go:49] "None policy: Start"
Oct 8 19:44:09.416579 kubelet[2440]: I1008 19:44:09.416464 2440 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:44:09.416579 kubelet[2440]: I1008 19:44:09.416493 2440 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:44:09.416699 kubelet[2440]: I1008 19:44:09.416667 2440 state_mem.go:75] "Updated machine memory state"
Oct 8 19:44:09.420486 kubelet[2440]: I1008 19:44:09.420390 2440 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:44:09.420637 kubelet[2440]: I1008 19:44:09.420565 2440 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 8 19:44:09.420637 kubelet[2440]: I1008 19:44:09.420596 2440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 8 19:44:09.420933 kubelet[2440]: I1008 19:44:09.420804 2440 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:44:09.524817 kubelet[2440]: I1008 19:44:09.524391 2440 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:44:09.539326 kubelet[2440]: I1008 19:44:09.539290 2440 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Oct 8 19:44:09.539425 kubelet[2440]: I1008 19:44:09.539372 2440 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Oct 8 19:44:09.668232 kubelet[2440]: I1008 19:44:09.668123 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:09.668232 kubelet[2440]: I1008 19:44:09.668165 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 19:44:09.668232 kubelet[2440]: I1008 19:44:09.668185 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/442d751ee4d60708ee366de9e593a9dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"442d751ee4d60708ee366de9e593a9dd\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:09.668232 kubelet[2440]: I1008 19:44:09.668202 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:09.668232 kubelet[2440]: I1008 19:44:09.668217 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:09.668417 kubelet[2440]: I1008 19:44:09.668233 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:09.668417 kubelet[2440]: I1008 19:44:09.668257 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/442d751ee4d60708ee366de9e593a9dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"442d751ee4d60708ee366de9e593a9dd\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:09.668417 kubelet[2440]: I1008 19:44:09.668273 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/442d751ee4d60708ee366de9e593a9dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"442d751ee4d60708ee366de9e593a9dd\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:09.668417 kubelet[2440]: I1008 19:44:09.668288 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:44:09.788035 kubelet[2440]: E1008 19:44:09.787999 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:09.789087 kubelet[2440]: E1008 19:44:09.789064 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:09.789201 kubelet[2440]: E1008 19:44:09.789184 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:10.353102 kubelet[2440]: I1008 19:44:10.353055 2440 apiserver.go:52] "Watching apiserver"
Oct 8 19:44:10.367048 kubelet[2440]: I1008 19:44:10.366983 2440 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 8 19:44:10.398851 kubelet[2440]: E1008 19:44:10.398659 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:10.399673 kubelet[2440]: E1008 19:44:10.399583 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:10.416844 kubelet[2440]: E1008 19:44:10.416801 2440 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 8 19:44:10.416995 kubelet[2440]: E1008 19:44:10.416974 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:10.455670 kubelet[2440]: I1008 19:44:10.455598 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4555617490000001 podStartE2EDuration="1.455561749s" podCreationTimestamp="2024-10-08 19:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:44:10.440262717 +0000 UTC m=+1.141795104" watchObservedRunningTime="2024-10-08 19:44:10.455561749 +0000 UTC m=+1.157094136"
Oct 8 19:44:10.455843 kubelet[2440]: I1008 19:44:10.455752 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.455746631 podStartE2EDuration="1.455746631s" podCreationTimestamp="2024-10-08 19:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:44:10.455023875 +0000 UTC m=+1.156556222" watchObservedRunningTime="2024-10-08 19:44:10.455746631 +0000 UTC m=+1.157279018"
Oct 8 19:44:10.468643 kubelet[2440]: I1008 19:44:10.468575 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.468558505 podStartE2EDuration="1.468558505s" podCreationTimestamp="2024-10-08 19:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:44:10.468434464 +0000 UTC m=+1.169966811" watchObservedRunningTime="2024-10-08 19:44:10.468558505 +0000 UTC m=+1.170090892"
Oct 8 19:44:11.406707 kubelet[2440]: E1008 19:44:11.403990 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:12.640206 kubelet[2440]: E1008 19:44:12.640169 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:14.056089 sudo[1616]: pam_unix(sudo:session): session closed for user root
Oct 8 19:44:14.057632 sshd[1613]: pam_unix(sshd:session): session closed for user core
Oct 8 19:44:14.060753 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:44:14.061214 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:38864.service: Deactivated successfully.
Oct 8 19:44:14.062933 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:44:14.063233 systemd[1]: session-7.scope: Consumed 6.705s CPU time, 103.3M memory peak, 0B memory swap peak.
Oct 8 19:44:14.065269 systemd-logind[1424]: Removed session 7.
Oct 8 19:44:14.678164 kubelet[2440]: E1008 19:44:14.678127 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:15.530332 kubelet[2440]: I1008 19:44:15.529939 2440 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:44:15.530478 containerd[1437]: time="2024-10-08T19:44:15.530411566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:44:15.533038 kubelet[2440]: I1008 19:44:15.530604 2440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:44:16.625000 systemd[1]: Created slice kubepods-besteffort-pod565d0d17_7e9d_4e97_82cb_d33756f68b17.slice - libcontainer container kubepods-besteffort-pod565d0d17_7e9d_4e97_82cb_d33756f68b17.slice.
Oct 8 19:44:16.659639 systemd[1]: Created slice kubepods-besteffort-pod2ce583ee_0b91_440d_a8e9_f9ff3a2dba4e.slice - libcontainer container kubepods-besteffort-pod2ce583ee_0b91_440d_a8e9_f9ff3a2dba4e.slice.
Oct 8 19:44:16.714405 kubelet[2440]: I1008 19:44:16.714348 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/565d0d17-7e9d-4e97-82cb-d33756f68b17-var-lib-calico\") pod \"tigera-operator-55748b469f-s56pw\" (UID: \"565d0d17-7e9d-4e97-82cb-d33756f68b17\") " pod="tigera-operator/tigera-operator-55748b469f-s56pw"
Oct 8 19:44:16.714405 kubelet[2440]: I1008 19:44:16.714387 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e-lib-modules\") pod \"kube-proxy-p2264\" (UID: \"2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e\") " pod="kube-system/kube-proxy-p2264"
Oct 8 19:44:16.714405 kubelet[2440]: I1008 19:44:16.714412 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd5h9\" (UniqueName: \"kubernetes.io/projected/2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e-kube-api-access-xd5h9\") pod \"kube-proxy-p2264\" (UID: \"2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e\") " pod="kube-system/kube-proxy-p2264"
Oct 8 19:44:16.714874 kubelet[2440]: I1008 19:44:16.714432 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e-xtables-lock\") pod \"kube-proxy-p2264\" (UID: \"2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e\") " pod="kube-system/kube-proxy-p2264"
Oct 8 19:44:16.714874 kubelet[2440]: I1008 19:44:16.714451 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8776h\" (UniqueName: \"kubernetes.io/projected/565d0d17-7e9d-4e97-82cb-d33756f68b17-kube-api-access-8776h\") pod \"tigera-operator-55748b469f-s56pw\" (UID: \"565d0d17-7e9d-4e97-82cb-d33756f68b17\") " pod="tigera-operator/tigera-operator-55748b469f-s56pw"
Oct 8 19:44:16.714874 kubelet[2440]: I1008 19:44:16.714466 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e-kube-proxy\") pod \"kube-proxy-p2264\" (UID: \"2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e\") " pod="kube-system/kube-proxy-p2264"
Oct 8 19:44:16.935381 containerd[1437]: time="2024-10-08T19:44:16.935228696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-s56pw,Uid:565d0d17-7e9d-4e97-82cb-d33756f68b17,Namespace:tigera-operator,Attempt:0,}"
Oct 8 19:44:16.956691 containerd[1437]: time="2024-10-08T19:44:16.956458355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:16.956691 containerd[1437]: time="2024-10-08T19:44:16.956512901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:16.956691 containerd[1437]: time="2024-10-08T19:44:16.956531069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:16.956691 containerd[1437]: time="2024-10-08T19:44:16.956543995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:16.962995 kubelet[2440]: E1008 19:44:16.962750 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:16.963427 containerd[1437]: time="2024-10-08T19:44:16.963378621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2264,Uid:2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e,Namespace:kube-system,Attempt:0,}"
Oct 8 19:44:16.978885 systemd[1]: Started cri-containerd-931170fe766f41e3025478d456838c37ee9b3bd369794f27c65203972978b78d.scope - libcontainer container 931170fe766f41e3025478d456838c37ee9b3bd369794f27c65203972978b78d.
Oct 8 19:44:16.991648 containerd[1437]: time="2024-10-08T19:44:16.991437302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:16.991648 containerd[1437]: time="2024-10-08T19:44:16.991483564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:16.991648 containerd[1437]: time="2024-10-08T19:44:16.991504334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:16.991648 containerd[1437]: time="2024-10-08T19:44:16.991522422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:17.008835 systemd[1]: Started cri-containerd-49d2f7db09bf9775e41feaed53279efc0ea9248647a6996e29ec963f756dc040.scope - libcontainer container 49d2f7db09bf9775e41feaed53279efc0ea9248647a6996e29ec963f756dc040.
Oct 8 19:44:17.012977 containerd[1437]: time="2024-10-08T19:44:17.012940295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-s56pw,Uid:565d0d17-7e9d-4e97-82cb-d33756f68b17,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"931170fe766f41e3025478d456838c37ee9b3bd369794f27c65203972978b78d\""
Oct 8 19:44:17.014961 containerd[1437]: time="2024-10-08T19:44:17.014932386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 19:44:17.030470 containerd[1437]: time="2024-10-08T19:44:17.030436040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2264,Uid:2ce583ee-0b91-440d-a8e9-f9ff3a2dba4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d2f7db09bf9775e41feaed53279efc0ea9248647a6996e29ec963f756dc040\""
Oct 8 19:44:17.032923 kubelet[2440]: E1008 19:44:17.032456 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:17.034960 containerd[1437]: time="2024-10-08T19:44:17.034928049Z" level=info msg="CreateContainer within sandbox \"49d2f7db09bf9775e41feaed53279efc0ea9248647a6996e29ec963f756dc040\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:44:17.050448 containerd[1437]: time="2024-10-08T19:44:17.050384003Z" level=info msg="CreateContainer within sandbox \"49d2f7db09bf9775e41feaed53279efc0ea9248647a6996e29ec963f756dc040\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9815ebf0a5c6da06941ba52c9b5a3103ab10056ffdbfc43749c102940eba0350\""
Oct 8 19:44:17.051181 containerd[1437]: time="2024-10-08T19:44:17.051145463Z" level=info msg="StartContainer for \"9815ebf0a5c6da06941ba52c9b5a3103ab10056ffdbfc43749c102940eba0350\""
Oct 8 19:44:17.076848 systemd[1]: Started cri-containerd-9815ebf0a5c6da06941ba52c9b5a3103ab10056ffdbfc43749c102940eba0350.scope - libcontainer container 9815ebf0a5c6da06941ba52c9b5a3103ab10056ffdbfc43749c102940eba0350.
Oct 8 19:44:17.108397 containerd[1437]: time="2024-10-08T19:44:17.107180407Z" level=info msg="StartContainer for \"9815ebf0a5c6da06941ba52c9b5a3103ab10056ffdbfc43749c102940eba0350\" returns successfully"
Oct 8 19:44:17.415265 kubelet[2440]: E1008 19:44:17.415230 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:17.426065 kubelet[2440]: I1008 19:44:17.425991 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p2264" podStartSLOduration=1.425952308 podStartE2EDuration="1.425952308s" podCreationTimestamp="2024-10-08 19:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:44:17.425906927 +0000 UTC m=+8.127439314" watchObservedRunningTime="2024-10-08 19:44:17.425952308 +0000 UTC m=+8.127484695"
Oct 8 19:44:17.950286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591007620.mount: Deactivated successfully.
Oct 8 19:44:18.269040 kubelet[2440]: E1008 19:44:18.269011 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:18.390590 containerd[1437]: time="2024-10-08T19:44:18.390541984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:18.391558 containerd[1437]: time="2024-10-08T19:44:18.391467257Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485907"
Oct 8 19:44:18.392440 containerd[1437]: time="2024-10-08T19:44:18.392202729Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:18.395001 containerd[1437]: time="2024-10-08T19:44:18.394947933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:18.395903 containerd[1437]: time="2024-10-08T19:44:18.395801455Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.380709478s"
Oct 8 19:44:18.395903 containerd[1437]: time="2024-10-08T19:44:18.395832468Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 8 19:44:18.398411 containerd[1437]: time="2024-10-08T19:44:18.398373906Z" level=info msg="CreateContainer within sandbox \"931170fe766f41e3025478d456838c37ee9b3bd369794f27c65203972978b78d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 19:44:18.408591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895892228.mount: Deactivated successfully.
Oct 8 19:44:18.409480 containerd[1437]: time="2024-10-08T19:44:18.409366129Z" level=info msg="CreateContainer within sandbox \"931170fe766f41e3025478d456838c37ee9b3bd369794f27c65203972978b78d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3f8b480dc5702195ecbfbf66cf4abd9eb159b1388cb0a0f7cba8e131fa56ffc5\""
Oct 8 19:44:18.409989 containerd[1437]: time="2024-10-08T19:44:18.409954499Z" level=info msg="StartContainer for \"3f8b480dc5702195ecbfbf66cf4abd9eb159b1388cb0a0f7cba8e131fa56ffc5\""
Oct 8 19:44:18.419495 kubelet[2440]: E1008 19:44:18.419458 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:18.440833 systemd[1]: Started cri-containerd-3f8b480dc5702195ecbfbf66cf4abd9eb159b1388cb0a0f7cba8e131fa56ffc5.scope - libcontainer container 3f8b480dc5702195ecbfbf66cf4abd9eb159b1388cb0a0f7cba8e131fa56ffc5.
Oct 8 19:44:18.461285 containerd[1437]: time="2024-10-08T19:44:18.461169663Z" level=info msg="StartContainer for \"3f8b480dc5702195ecbfbf66cf4abd9eb159b1388cb0a0f7cba8e131fa56ffc5\" returns successfully"
Oct 8 19:44:19.434381 kubelet[2440]: I1008 19:44:19.432330 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-s56pw" podStartSLOduration=2.04988362 podStartE2EDuration="3.432314566s" podCreationTimestamp="2024-10-08 19:44:16 +0000 UTC" firstStartedPulling="2024-10-08 19:44:17.014329716 +0000 UTC m=+7.715862103" lastFinishedPulling="2024-10-08 19:44:18.396760702 +0000 UTC m=+9.098293049" observedRunningTime="2024-10-08 19:44:19.431668146 +0000 UTC m=+10.133200493" watchObservedRunningTime="2024-10-08 19:44:19.432314566 +0000 UTC m=+10.133846953"
Oct 8 19:44:21.093804 update_engine[1429]: I1008 19:44:21.093737 1429 update_attempter.cc:509] Updating boot flags...
Oct 8 19:44:21.117760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2825)
Oct 8 19:44:21.158789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2825)
Oct 8 19:44:21.201821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2825)
Oct 8 19:44:22.537211 systemd[1]: Created slice kubepods-besteffort-podbfe5c16b_b3ec_49b0_a4f7_e1f205dadbd5.slice - libcontainer container kubepods-besteffort-podbfe5c16b_b3ec_49b0_a4f7_e1f205dadbd5.slice.
Oct 8 19:44:22.553259 kubelet[2440]: I1008 19:44:22.553196 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjhj\" (UniqueName: \"kubernetes.io/projected/bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5-kube-api-access-9jjhj\") pod \"calico-typha-9647c5f86-wwd99\" (UID: \"bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5\") " pod="calico-system/calico-typha-9647c5f86-wwd99"
Oct 8 19:44:22.553259 kubelet[2440]: I1008 19:44:22.553242 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5-tigera-ca-bundle\") pod \"calico-typha-9647c5f86-wwd99\" (UID: \"bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5\") " pod="calico-system/calico-typha-9647c5f86-wwd99"
Oct 8 19:44:22.553259 kubelet[2440]: I1008 19:44:22.553261 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5-typha-certs\") pod \"calico-typha-9647c5f86-wwd99\" (UID: \"bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5\") " pod="calico-system/calico-typha-9647c5f86-wwd99"
Oct 8 19:44:22.575473 systemd[1]: Created slice kubepods-besteffort-pod6da2bebe_f07b_4888_af0c_ca2cc820279b.slice - libcontainer container kubepods-besteffort-pod6da2bebe_f07b_4888_af0c_ca2cc820279b.slice.
Oct 8 19:44:22.647399 kubelet[2440]: E1008 19:44:22.647368 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:22.653692 kubelet[2440]: I1008 19:44:22.653655 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-xtables-lock\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653807 kubelet[2440]: I1008 19:44:22.653719 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da2bebe-f07b-4888-af0c-ca2cc820279b-tigera-ca-bundle\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653807 kubelet[2440]: I1008 19:44:22.653736 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6da2bebe-f07b-4888-af0c-ca2cc820279b-node-certs\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653807 kubelet[2440]: I1008 19:44:22.653752 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-var-run-calico\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653807 kubelet[2440]: I1008 19:44:22.653767 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-cni-net-dir\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653807 kubelet[2440]: I1008 19:44:22.653785 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-var-lib-calico\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653964 kubelet[2440]: I1008 19:44:22.653802 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-cni-bin-dir\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653964 kubelet[2440]: I1008 19:44:22.653817 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-cni-log-dir\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653964 kubelet[2440]: I1008 19:44:22.653833 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-flexvol-driver-host\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653964 kubelet[2440]: I1008 19:44:22.653851 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srpf\" (UniqueName: 
\"kubernetes.io/projected/6da2bebe-f07b-4888-af0c-ca2cc820279b-kube-api-access-4srpf\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.653964 kubelet[2440]: I1008 19:44:22.653932 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-lib-modules\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.654079 kubelet[2440]: I1008 19:44:22.653962 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6da2bebe-f07b-4888-af0c-ca2cc820279b-policysync\") pod \"calico-node-jslp9\" (UID: \"6da2bebe-f07b-4888-af0c-ca2cc820279b\") " pod="calico-system/calico-node-jslp9" Oct 8 19:44:22.683599 kubelet[2440]: E1008 19:44:22.683547 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkd98" podUID="580b162c-9f56-423c-982e-ca1911345f68" Oct 8 19:44:22.754451 kubelet[2440]: I1008 19:44:22.754415 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/580b162c-9f56-423c-982e-ca1911345f68-kubelet-dir\") pod \"csi-node-driver-nkd98\" (UID: \"580b162c-9f56-423c-982e-ca1911345f68\") " pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:22.754590 kubelet[2440]: I1008 19:44:22.754482 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/580b162c-9f56-423c-982e-ca1911345f68-socket-dir\") pod 
\"csi-node-driver-nkd98\" (UID: \"580b162c-9f56-423c-982e-ca1911345f68\") " pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:22.754590 kubelet[2440]: I1008 19:44:22.754523 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6whb\" (UniqueName: \"kubernetes.io/projected/580b162c-9f56-423c-982e-ca1911345f68-kube-api-access-f6whb\") pod \"csi-node-driver-nkd98\" (UID: \"580b162c-9f56-423c-982e-ca1911345f68\") " pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:22.754590 kubelet[2440]: I1008 19:44:22.754542 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/580b162c-9f56-423c-982e-ca1911345f68-registration-dir\") pod \"csi-node-driver-nkd98\" (UID: \"580b162c-9f56-423c-982e-ca1911345f68\") " pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:22.754590 kubelet[2440]: I1008 19:44:22.754588 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/580b162c-9f56-423c-982e-ca1911345f68-varrun\") pod \"csi-node-driver-nkd98\" (UID: \"580b162c-9f56-423c-982e-ca1911345f68\") " pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:22.756320 kubelet[2440]: E1008 19:44:22.756283 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.756320 kubelet[2440]: W1008 19:44:22.756319 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.756427 kubelet[2440]: E1008 19:44:22.756414 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.756666 kubelet[2440]: E1008 19:44:22.756647 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.756666 kubelet[2440]: W1008 19:44:22.756665 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.756738 kubelet[2440]: E1008 19:44:22.756691 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.759857 kubelet[2440]: E1008 19:44:22.759836 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.759857 kubelet[2440]: W1008 19:44:22.759852 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.759950 kubelet[2440]: E1008 19:44:22.759869 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.760131 kubelet[2440]: E1008 19:44:22.760114 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.760163 kubelet[2440]: W1008 19:44:22.760140 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.760163 kubelet[2440]: E1008 19:44:22.760155 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.760420 kubelet[2440]: E1008 19:44:22.760405 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.760420 kubelet[2440]: W1008 19:44:22.760419 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.760478 kubelet[2440]: E1008 19:44:22.760431 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.760650 kubelet[2440]: E1008 19:44:22.760634 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.760650 kubelet[2440]: W1008 19:44:22.760648 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.760708 kubelet[2440]: E1008 19:44:22.760657 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.761028 kubelet[2440]: E1008 19:44:22.760984 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.761028 kubelet[2440]: W1008 19:44:22.761000 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.761028 kubelet[2440]: E1008 19:44:22.761011 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.761208 kubelet[2440]: E1008 19:44:22.761196 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.761208 kubelet[2440]: W1008 19:44:22.761207 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.761260 kubelet[2440]: E1008 19:44:22.761217 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.762385 kubelet[2440]: E1008 19:44:22.762362 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.762385 kubelet[2440]: W1008 19:44:22.762377 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.762484 kubelet[2440]: E1008 19:44:22.762390 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.773421 kubelet[2440]: E1008 19:44:22.773393 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.773421 kubelet[2440]: W1008 19:44:22.773410 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.773522 kubelet[2440]: E1008 19:44:22.773425 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.842647 kubelet[2440]: E1008 19:44:22.842513 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:22.844797 containerd[1437]: time="2024-10-08T19:44:22.844419857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9647c5f86-wwd99,Uid:bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5,Namespace:calico-system,Attempt:0,}" Oct 8 19:44:22.857149 kubelet[2440]: E1008 19:44:22.857123 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.857400 kubelet[2440]: W1008 19:44:22.857259 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.857400 kubelet[2440]: E1008 19:44:22.857285 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.857692 kubelet[2440]: E1008 19:44:22.857621 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.857692 kubelet[2440]: W1008 19:44:22.857647 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.857692 kubelet[2440]: E1008 19:44:22.857664 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.860731 kubelet[2440]: E1008 19:44:22.858722 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.860788 kubelet[2440]: W1008 19:44:22.860737 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.860788 kubelet[2440]: E1008 19:44:22.860762 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.860990 kubelet[2440]: E1008 19:44:22.860978 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.860990 kubelet[2440]: W1008 19:44:22.860990 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.861047 kubelet[2440]: E1008 19:44:22.861023 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.861403 kubelet[2440]: E1008 19:44:22.861377 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.861403 kubelet[2440]: W1008 19:44:22.861392 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.861403 kubelet[2440]: E1008 19:44:22.861427 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.861635 kubelet[2440]: E1008 19:44:22.861564 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.861635 kubelet[2440]: W1008 19:44:22.861573 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.861771 kubelet[2440]: E1008 19:44:22.861748 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.861771 kubelet[2440]: W1008 19:44:22.861760 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.861771 kubelet[2440]: E1008 19:44:22.861771 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.861886 kubelet[2440]: E1008 19:44:22.861747 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.861980 kubelet[2440]: E1008 19:44:22.861963 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.861980 kubelet[2440]: W1008 19:44:22.861974 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.862045 kubelet[2440]: E1008 19:44:22.861983 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.862179 kubelet[2440]: E1008 19:44:22.862148 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.862179 kubelet[2440]: W1008 19:44:22.862159 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.862179 kubelet[2440]: E1008 19:44:22.862167 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.862317 kubelet[2440]: E1008 19:44:22.862307 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.862317 kubelet[2440]: W1008 19:44:22.862316 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.862379 kubelet[2440]: E1008 19:44:22.862331 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.862523 kubelet[2440]: E1008 19:44:22.862507 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.862556 kubelet[2440]: W1008 19:44:22.862533 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.862584 kubelet[2440]: E1008 19:44:22.862553 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.862876 kubelet[2440]: E1008 19:44:22.862733 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.862876 kubelet[2440]: W1008 19:44:22.862742 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.862876 kubelet[2440]: E1008 19:44:22.862794 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.862987 kubelet[2440]: E1008 19:44:22.862973 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.862987 kubelet[2440]: W1008 19:44:22.862985 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.863044 kubelet[2440]: E1008 19:44:22.863024 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.863308 kubelet[2440]: E1008 19:44:22.863292 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.863308 kubelet[2440]: W1008 19:44:22.863305 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.863387 kubelet[2440]: E1008 19:44:22.863360 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.863570 kubelet[2440]: E1008 19:44:22.863553 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.863570 kubelet[2440]: W1008 19:44:22.863569 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.863658 kubelet[2440]: E1008 19:44:22.863584 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.863963 kubelet[2440]: E1008 19:44:22.863945 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.863963 kubelet[2440]: W1008 19:44:22.863959 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.864043 kubelet[2440]: E1008 19:44:22.863992 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.864231 kubelet[2440]: E1008 19:44:22.864213 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.864231 kubelet[2440]: W1008 19:44:22.864227 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.864300 kubelet[2440]: E1008 19:44:22.864240 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.864609 kubelet[2440]: E1008 19:44:22.864460 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.864609 kubelet[2440]: W1008 19:44:22.864472 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.864609 kubelet[2440]: E1008 19:44:22.864601 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.864845 kubelet[2440]: E1008 19:44:22.864606 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.864845 kubelet[2440]: W1008 19:44:22.864637 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.864845 kubelet[2440]: E1008 19:44:22.864648 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.865263 kubelet[2440]: E1008 19:44:22.865239 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.865263 kubelet[2440]: W1008 19:44:22.865256 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.865343 kubelet[2440]: E1008 19:44:22.865279 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.865514 kubelet[2440]: E1008 19:44:22.865500 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.865514 kubelet[2440]: W1008 19:44:22.865513 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.865578 kubelet[2440]: E1008 19:44:22.865526 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:22.865811 kubelet[2440]: E1008 19:44:22.865796 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.865848 kubelet[2440]: W1008 19:44:22.865815 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.866002 kubelet[2440]: E1008 19:44:22.865976 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:22.866322 kubelet[2440]: E1008 19:44:22.866306 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:22.866322 kubelet[2440]: W1008 19:44:22.866320 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:22.866377 kubelet[2440]: E1008 19:44:22.866331 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 8 19:44:22.867135 kubelet[2440]: E1008 19:44:22.867117 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:22.867135 kubelet[2440]: W1008 19:44:22.867132 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:22.867212 kubelet[2440]: E1008 19:44:22.867144 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:22.867374 kubelet[2440]: E1008 19:44:22.867359 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:22.867374 kubelet[2440]: W1008 19:44:22.867373 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:22.867444 kubelet[2440]: E1008 19:44:22.867384 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:22.875956 containerd[1437]: time="2024-10-08T19:44:22.874609242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:22.875956 containerd[1437]: time="2024-10-08T19:44:22.875783728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:22.875956 containerd[1437]: time="2024-10-08T19:44:22.875821061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:22.875956 containerd[1437]: time="2024-10-08T19:44:22.875868077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:22.883593 kubelet[2440]: E1008 19:44:22.881421 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:22.883888 containerd[1437]: time="2024-10-08T19:44:22.883543807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jslp9,Uid:6da2bebe-f07b-4888-af0c-ca2cc820279b,Namespace:calico-system,Attempt:0,}"
Oct 8 19:44:22.884413 kubelet[2440]: E1008 19:44:22.884394 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:22.884472 kubelet[2440]: W1008 19:44:22.884450 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:22.884565 kubelet[2440]: E1008 19:44:22.884546 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:22.897856 systemd[1]: Started cri-containerd-6139120f86bcef0132159dab52bb31d86b91fea9f1dbd95448f9fe429a1125d1.scope - libcontainer container 6139120f86bcef0132159dab52bb31d86b91fea9f1dbd95448f9fe429a1125d1.
Oct 8 19:44:22.914402 containerd[1437]: time="2024-10-08T19:44:22.914311392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:44:22.914402 containerd[1437]: time="2024-10-08T19:44:22.914370333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:22.914402 containerd[1437]: time="2024-10-08T19:44:22.914384577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:44:22.914402 containerd[1437]: time="2024-10-08T19:44:22.914394221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:44:22.937045 containerd[1437]: time="2024-10-08T19:44:22.936939366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9647c5f86-wwd99,Uid:bfe5c16b-b3ec-49b0-a4f7-e1f205dadbd5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6139120f86bcef0132159dab52bb31d86b91fea9f1dbd95448f9fe429a1125d1\""
Oct 8 19:44:22.938748 kubelet[2440]: E1008 19:44:22.938177 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:22.939583 containerd[1437]: time="2024-10-08T19:44:22.939554389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 8 19:44:22.942103 systemd[1]: Started cri-containerd-7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c.scope - libcontainer container 7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c.
Oct 8 19:44:22.966813 containerd[1437]: time="2024-10-08T19:44:22.966743938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jslp9,Uid:6da2bebe-f07b-4888-af0c-ca2cc820279b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\""
Oct 8 19:44:22.967625 kubelet[2440]: E1008 19:44:22.967606 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:24.091143 containerd[1437]: time="2024-10-08T19:44:24.091092694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479"
Oct 8 19:44:24.098846 containerd[1437]: time="2024-10-08T19:44:24.098579156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.158986074s"
Oct 8 19:44:24.098846 containerd[1437]: time="2024-10-08T19:44:24.098626851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\""
Oct 8 19:44:24.101347 containerd[1437]: time="2024-10-08T19:44:24.101311131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:24.105036 containerd[1437]: time="2024-10-08T19:44:24.104437109Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:24.105036 containerd[1437]: time="2024-10-08T19:44:24.104731281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 8 19:44:24.111134 containerd[1437]: time="2024-10-08T19:44:24.108730372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:24.137277 containerd[1437]: time="2024-10-08T19:44:24.137240053Z" level=info msg="CreateContainer within sandbox \"6139120f86bcef0132159dab52bb31d86b91fea9f1dbd95448f9fe429a1125d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 19:44:24.162465 containerd[1437]: time="2024-10-08T19:44:24.162396244Z" level=info msg="CreateContainer within sandbox \"6139120f86bcef0132159dab52bb31d86b91fea9f1dbd95448f9fe429a1125d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7e44c8be74f6c6da89fe8c0b17e54c338245596349e05783488fac24ba19ee2\""
Oct 8 19:44:24.163552 containerd[1437]: time="2024-10-08T19:44:24.163252032Z" level=info msg="StartContainer for \"c7e44c8be74f6c6da89fe8c0b17e54c338245596349e05783488fac24ba19ee2\""
Oct 8 19:44:24.191869 systemd[1]: Started cri-containerd-c7e44c8be74f6c6da89fe8c0b17e54c338245596349e05783488fac24ba19ee2.scope - libcontainer container c7e44c8be74f6c6da89fe8c0b17e54c338245596349e05783488fac24ba19ee2.
Oct 8 19:44:24.224930 containerd[1437]: time="2024-10-08T19:44:24.224884437Z" level=info msg="StartContainer for \"c7e44c8be74f6c6da89fe8c0b17e54c338245596349e05783488fac24ba19ee2\" returns successfully"
Oct 8 19:44:24.377771 kubelet[2440]: E1008 19:44:24.377396 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkd98" podUID="580b162c-9f56-423c-982e-ca1911345f68"
Oct 8 19:44:24.438773 kubelet[2440]: E1008 19:44:24.438742 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:24.462450 kubelet[2440]: E1008 19:44:24.462402 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.462450 kubelet[2440]: W1008 19:44:24.462422 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.462450 kubelet[2440]: E1008 19:44:24.462440 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.462747 kubelet[2440]: E1008 19:44:24.462609 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.462747 kubelet[2440]: W1008 19:44:24.462617 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.462747 kubelet[2440]: E1008 19:44:24.462625 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.462869 kubelet[2440]: E1008 19:44:24.462772 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.462869 kubelet[2440]: W1008 19:44:24.462779 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.462869 kubelet[2440]: E1008 19:44:24.462787 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.462950 kubelet[2440]: E1008 19:44:24.462922 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.462950 kubelet[2440]: W1008 19:44:24.462930 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.462950 kubelet[2440]: E1008 19:44:24.462937 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.463075 kubelet[2440]: E1008 19:44:24.463064 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.463075 kubelet[2440]: W1008 19:44:24.463072 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.463140 kubelet[2440]: E1008 19:44:24.463080 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.463234 kubelet[2440]: E1008 19:44:24.463221 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.463234 kubelet[2440]: W1008 19:44:24.463232 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.463296 kubelet[2440]: E1008 19:44:24.463240 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.463483 kubelet[2440]: E1008 19:44:24.463373 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.463483 kubelet[2440]: W1008 19:44:24.463383 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.463483 kubelet[2440]: E1008 19:44:24.463390 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.463583 kubelet[2440]: E1008 19:44:24.463524 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.463583 kubelet[2440]: W1008 19:44:24.463531 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.463583 kubelet[2440]: E1008 19:44:24.463538 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.463861 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.464715 kubelet[2440]: W1008 19:44:24.463880 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.463894 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.464087 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.464715 kubelet[2440]: W1008 19:44:24.464092 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.464099 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.464221 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.464715 kubelet[2440]: W1008 19:44:24.464227 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.464235 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.464715 kubelet[2440]: E1008 19:44:24.464359 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.465053 kubelet[2440]: W1008 19:44:24.464366 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.465053 kubelet[2440]: E1008 19:44:24.464373 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.465053 kubelet[2440]: E1008 19:44:24.464503 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.465053 kubelet[2440]: W1008 19:44:24.464513 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.465053 kubelet[2440]: E1008 19:44:24.464520 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.465053 kubelet[2440]: E1008 19:44:24.464644 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.465053 kubelet[2440]: W1008 19:44:24.464650 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.465053 kubelet[2440]: E1008 19:44:24.464657 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.465053 kubelet[2440]: E1008 19:44:24.464796 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.465053 kubelet[2440]: W1008 19:44:24.464805 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.465398 kubelet[2440]: E1008 19:44:24.464816 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.475310 kubelet[2440]: E1008 19:44:24.475276 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.475310 kubelet[2440]: W1008 19:44:24.475302 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.475507 kubelet[2440]: E1008 19:44:24.475321 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.475507 kubelet[2440]: E1008 19:44:24.475488 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.475507 kubelet[2440]: W1008 19:44:24.475496 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.475507 kubelet[2440]: E1008 19:44:24.475505 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.475856 kubelet[2440]: E1008 19:44:24.475672 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.475856 kubelet[2440]: W1008 19:44:24.475690 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.475856 kubelet[2440]: E1008 19:44:24.475699 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.476099 kubelet[2440]: E1008 19:44:24.476050 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.476099 kubelet[2440]: W1008 19:44:24.476069 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.476260 kubelet[2440]: E1008 19:44:24.476186 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.476500 kubelet[2440]: E1008 19:44:24.476455 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.476500 kubelet[2440]: W1008 19:44:24.476468 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.476728 kubelet[2440]: E1008 19:44:24.476486 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.476905 kubelet[2440]: E1008 19:44:24.476871 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.476905 kubelet[2440]: W1008 19:44:24.476888 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.477146 kubelet[2440]: E1008 19:44:24.477102 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.477289 kubelet[2440]: E1008 19:44:24.477225 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.477289 kubelet[2440]: W1008 19:44:24.477234 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.477423 kubelet[2440]: E1008 19:44:24.477384 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.477733 kubelet[2440]: E1008 19:44:24.477719 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.477811 kubelet[2440]: W1008 19:44:24.477798 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.477909 kubelet[2440]: E1008 19:44:24.477876 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.478076 kubelet[2440]: E1008 19:44:24.478063 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.478076 kubelet[2440]: W1008 19:44:24.478076 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.478300 kubelet[2440]: E1008 19:44:24.478093 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.478300 kubelet[2440]: E1008 19:44:24.478255 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.478300 kubelet[2440]: W1008 19:44:24.478263 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.478300 kubelet[2440]: E1008 19:44:24.478297 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.478517 kubelet[2440]: E1008 19:44:24.478392 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.478517 kubelet[2440]: W1008 19:44:24.478399 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.478517 kubelet[2440]: E1008 19:44:24.478439 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.478624 kubelet[2440]: E1008 19:44:24.478537 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.478624 kubelet[2440]: W1008 19:44:24.478545 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.478624 kubelet[2440]: E1008 19:44:24.478559 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.478799 kubelet[2440]: E1008 19:44:24.478786 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.478799 kubelet[2440]: W1008 19:44:24.478798 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.478877 kubelet[2440]: E1008 19:44:24.478812 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.479173 kubelet[2440]: E1008 19:44:24.479125 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.479173 kubelet[2440]: W1008 19:44:24.479139 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.479314 kubelet[2440]: E1008 19:44:24.479157 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.479403 kubelet[2440]: E1008 19:44:24.479388 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.479403 kubelet[2440]: W1008 19:44:24.479401 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.479481 kubelet[2440]: E1008 19:44:24.479417 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.479647 kubelet[2440]: E1008 19:44:24.479635 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.479647 kubelet[2440]: W1008 19:44:24.479647 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.479728 kubelet[2440]: E1008 19:44:24.479665 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.480063 kubelet[2440]: E1008 19:44:24.479935 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.480063 kubelet[2440]: W1008 19:44:24.479949 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.480063 kubelet[2440]: E1008 19:44:24.479966 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.480275 kubelet[2440]: E1008 19:44:24.480263 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.480350 kubelet[2440]: W1008 19:44:24.480338 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.480423 kubelet[2440]: E1008 19:44:24.480414 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.688289 kubelet[2440]: E1008 19:44:24.688176 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:44:24.703316 kubelet[2440]: I1008 19:44:24.703236 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9647c5f86-wwd99" podStartSLOduration=1.539982519 podStartE2EDuration="2.703217506s" podCreationTimestamp="2024-10-08 19:44:22 +0000 UTC" firstStartedPulling="2024-10-08 19:44:22.938825698 +0000 UTC m=+13.640358085" lastFinishedPulling="2024-10-08 19:44:24.102060645 +0000 UTC m=+14.803593072" observedRunningTime="2024-10-08 19:44:24.452120298 +0000 UTC m=+15.153652685" watchObservedRunningTime="2024-10-08 19:44:24.703217506 +0000 UTC m=+15.404749893"
Oct 8 19:44:24.767019 kubelet[2440]: E1008 19:44:24.766983 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.767019 kubelet[2440]: W1008 19:44:24.767006 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.767019 kubelet[2440]: E1008 19:44:24.767026 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.767215 kubelet[2440]: E1008 19:44:24.767205 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.767215 kubelet[2440]: W1008 19:44:24.767213 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.767298 kubelet[2440]: E1008 19:44:24.767224 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.767418 kubelet[2440]: E1008 19:44:24.767400 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.767418 kubelet[2440]: W1008 19:44:24.767417 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.767479 kubelet[2440]: E1008 19:44:24.767427 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.768058 kubelet[2440]: E1008 19:44:24.767666 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.768058 kubelet[2440]: W1008 19:44:24.767700 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.768058 kubelet[2440]: E1008 19:44:24.767712 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.768058 kubelet[2440]: E1008 19:44:24.767905 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.768058 kubelet[2440]: W1008 19:44:24.767913 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.768058 kubelet[2440]: E1008 19:44:24.767921 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.768245 kubelet[2440]: E1008 19:44:24.768199 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.768245 kubelet[2440]: W1008 19:44:24.768209 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.768245 kubelet[2440]: E1008 19:44:24.768219 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768429 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.769634 kubelet[2440]: W1008 19:44:24.768442 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768457 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768614 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.769634 kubelet[2440]: W1008 19:44:24.768622 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768630 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768822 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:44:24.769634 kubelet[2440]: W1008 19:44:24.768831 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768840 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 8 19:44:24.769634 kubelet[2440]: E1008 19:44:24.768985 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:24.770066 kubelet[2440]: W1008 19:44:24.768994 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:24.770066 kubelet[2440]: E1008 19:44:24.769002 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:24.770066 kubelet[2440]: E1008 19:44:24.769160 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:24.770066 kubelet[2440]: W1008 19:44:24.769169 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:24.770066 kubelet[2440]: E1008 19:44:24.769178 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:24.770066 kubelet[2440]: E1008 19:44:24.769352 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:24.770066 kubelet[2440]: W1008 19:44:24.769361 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:24.770066 kubelet[2440]: E1008 19:44:24.769377 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:24.770066 kubelet[2440]: E1008 19:44:24.769526 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:24.770066 kubelet[2440]: W1008 19:44:24.769542 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:24.770313 kubelet[2440]: E1008 19:44:24.769551 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:24.770313 kubelet[2440]: E1008 19:44:24.769768 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:24.770313 kubelet[2440]: W1008 19:44:24.769777 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:24.770313 kubelet[2440]: E1008 19:44:24.769788 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:44:24.771808 kubelet[2440]: E1008 19:44:24.770699 2440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:44:24.771808 kubelet[2440]: W1008 19:44:24.770716 2440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:44:24.771808 kubelet[2440]: E1008 19:44:24.770730 2440 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:44:25.273262 containerd[1437]: time="2024-10-08T19:44:25.273216783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:25.274924 containerd[1437]: time="2024-10-08T19:44:25.273953842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:44:25.274924 containerd[1437]: time="2024-10-08T19:44:25.274707747Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:25.276604 containerd[1437]: time="2024-10-08T19:44:25.276541934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:25.277734 containerd[1437]: time="2024-10-08T19:44:25.277167321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.17240595s" Oct 8 19:44:25.277734 containerd[1437]: time="2024-10-08T19:44:25.277198610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:44:25.279992 containerd[1437]: time="2024-10-08T19:44:25.279807428Z" level=info msg="CreateContainer within sandbox \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:44:25.307128 containerd[1437]: time="2024-10-08T19:44:25.307049191Z" level=info msg="CreateContainer within sandbox \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5\"" Oct 8 19:44:25.308826 containerd[1437]: time="2024-10-08T19:44:25.308794712Z" level=info msg="StartContainer for \"79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5\"" Oct 8 19:44:25.343897 systemd[1]: Started cri-containerd-79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5.scope - libcontainer container 79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5. Oct 8 19:44:25.370418 containerd[1437]: time="2024-10-08T19:44:25.370369112Z" level=info msg="StartContainer for \"79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5\" returns successfully" Oct 8 19:44:25.425960 systemd[1]: cri-containerd-79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5.scope: Deactivated successfully. Oct 8 19:44:25.438631 kubelet[2440]: I1008 19:44:25.438590 2440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:44:25.439017 kubelet[2440]: E1008 19:44:25.438924 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:25.439672 kubelet[2440]: E1008 19:44:25.439653 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:25.448952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5-rootfs.mount: Deactivated successfully. 
Oct 8 19:44:25.551091 containerd[1437]: time="2024-10-08T19:44:25.546555929Z" level=info msg="shim disconnected" id=79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5 namespace=k8s.io Oct 8 19:44:25.551091 containerd[1437]: time="2024-10-08T19:44:25.551017139Z" level=warning msg="cleaning up after shim disconnected" id=79519386545577ef59f6c0fd508ea1107e207e3fe77c81f5e22220053331ecd5 namespace=k8s.io Oct 8 19:44:25.551091 containerd[1437]: time="2024-10-08T19:44:25.551031664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:44:26.375834 kubelet[2440]: E1008 19:44:26.375771 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkd98" podUID="580b162c-9f56-423c-982e-ca1911345f68" Oct 8 19:44:26.442147 kubelet[2440]: E1008 19:44:26.442101 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:26.444752 containerd[1437]: time="2024-10-08T19:44:26.444717186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:44:28.376786 kubelet[2440]: E1008 19:44:28.376742 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkd98" podUID="580b162c-9f56-423c-982e-ca1911345f68" Oct 8 19:44:29.408029 containerd[1437]: time="2024-10-08T19:44:29.407985894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:29.408878 containerd[1437]: time="2024-10-08T19:44:29.408530909Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:44:29.409514 containerd[1437]: time="2024-10-08T19:44:29.409467061Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:29.412675 containerd[1437]: time="2024-10-08T19:44:29.412407070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:29.414071 containerd[1437]: time="2024-10-08T19:44:29.414029193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.969275716s" Oct 8 19:44:29.414071 containerd[1437]: time="2024-10-08T19:44:29.414066242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:44:29.416768 containerd[1437]: time="2024-10-08T19:44:29.416672048Z" level=info msg="CreateContainer within sandbox \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:44:29.428961 containerd[1437]: time="2024-10-08T19:44:29.428895879Z" level=info msg="CreateContainer within sandbox \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c\"" Oct 8 19:44:29.429432 containerd[1437]: time="2024-10-08T19:44:29.429370077Z" level=info 
msg="StartContainer for \"72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c\"" Oct 8 19:44:29.456908 systemd[1]: Started cri-containerd-72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c.scope - libcontainer container 72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c. Oct 8 19:44:29.483135 containerd[1437]: time="2024-10-08T19:44:29.480610904Z" level=info msg="StartContainer for \"72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c\" returns successfully" Oct 8 19:44:29.996106 systemd[1]: cri-containerd-72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c.scope: Deactivated successfully. Oct 8 19:44:30.010653 kubelet[2440]: I1008 19:44:30.010623 2440 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 19:44:30.021631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c-rootfs.mount: Deactivated successfully. Oct 8 19:44:30.100739 containerd[1437]: time="2024-10-08T19:44:30.100660481Z" level=info msg="shim disconnected" id=72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c namespace=k8s.io Oct 8 19:44:30.100739 containerd[1437]: time="2024-10-08T19:44:30.100735139Z" level=warning msg="cleaning up after shim disconnected" id=72d6688013dc56e71e217fc41daa9ae491ef86171e60000d93b6a31e6d10e38c namespace=k8s.io Oct 8 19:44:30.100739 containerd[1437]: time="2024-10-08T19:44:30.100744181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:44:30.134884 systemd[1]: Created slice kubepods-burstable-pod7fbc9347_b968_44a3_a96a_e937c3f2240a.slice - libcontainer container kubepods-burstable-pod7fbc9347_b968_44a3_a96a_e937c3f2240a.slice. Oct 8 19:44:30.144125 systemd[1]: Created slice kubepods-burstable-pode4211795_44be_4c33_a1ed_5582f08a21b7.slice - libcontainer container kubepods-burstable-pode4211795_44be_4c33_a1ed_5582f08a21b7.slice. 
Oct 8 19:44:30.151816 systemd[1]: Created slice kubepods-besteffort-pod1544abfa_775d_4d7b_9360_1ce3bf50e572.slice - libcontainer container kubepods-besteffort-pod1544abfa_775d_4d7b_9360_1ce3bf50e572.slice. Oct 8 19:44:30.216988 kubelet[2440]: I1008 19:44:30.216901 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl6h8\" (UniqueName: \"kubernetes.io/projected/e4211795-44be-4c33-a1ed-5582f08a21b7-kube-api-access-bl6h8\") pod \"coredns-6f6b679f8f-9qdpq\" (UID: \"e4211795-44be-4c33-a1ed-5582f08a21b7\") " pod="kube-system/coredns-6f6b679f8f-9qdpq" Oct 8 19:44:30.216988 kubelet[2440]: I1008 19:44:30.216946 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4211795-44be-4c33-a1ed-5582f08a21b7-config-volume\") pod \"coredns-6f6b679f8f-9qdpq\" (UID: \"e4211795-44be-4c33-a1ed-5582f08a21b7\") " pod="kube-system/coredns-6f6b679f8f-9qdpq" Oct 8 19:44:30.216988 kubelet[2440]: I1008 19:44:30.216971 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1544abfa-775d-4d7b-9360-1ce3bf50e572-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb77fd48c-ql2qz\" (UID: \"1544abfa-775d-4d7b-9360-1ce3bf50e572\") " pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" Oct 8 19:44:30.216988 kubelet[2440]: I1008 19:44:30.216990 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82jgx\" (UniqueName: \"kubernetes.io/projected/1544abfa-775d-4d7b-9360-1ce3bf50e572-kube-api-access-82jgx\") pod \"calico-kube-controllers-6bb77fd48c-ql2qz\" (UID: \"1544abfa-775d-4d7b-9360-1ce3bf50e572\") " pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" Oct 8 19:44:30.216988 kubelet[2440]: I1008 19:44:30.217013 2440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlh4\" (UniqueName: \"kubernetes.io/projected/7fbc9347-b968-44a3-a96a-e937c3f2240a-kube-api-access-tdlh4\") pod \"coredns-6f6b679f8f-9vxz5\" (UID: \"7fbc9347-b968-44a3-a96a-e937c3f2240a\") " pod="kube-system/coredns-6f6b679f8f-9vxz5" Oct 8 19:44:30.217404 kubelet[2440]: I1008 19:44:30.217033 2440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbc9347-b968-44a3-a96a-e937c3f2240a-config-volume\") pod \"coredns-6f6b679f8f-9vxz5\" (UID: \"7fbc9347-b968-44a3-a96a-e937c3f2240a\") " pod="kube-system/coredns-6f6b679f8f-9vxz5" Oct 8 19:44:30.381353 systemd[1]: Created slice kubepods-besteffort-pod580b162c_9f56_423c_982e_ca1911345f68.slice - libcontainer container kubepods-besteffort-pod580b162c_9f56_423c_982e_ca1911345f68.slice. Oct 8 19:44:30.384246 containerd[1437]: time="2024-10-08T19:44:30.384204534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkd98,Uid:580b162c-9f56-423c-982e-ca1911345f68,Namespace:calico-system,Attempt:0,}" Oct 8 19:44:30.443871 kubelet[2440]: E1008 19:44:30.441045 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:30.444106 containerd[1437]: time="2024-10-08T19:44:30.443432111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9vxz5,Uid:7fbc9347-b968-44a3-a96a-e937c3f2240a,Namespace:kube-system,Attempt:0,}" Oct 8 19:44:30.450258 kubelet[2440]: E1008 19:44:30.450231 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:30.451157 containerd[1437]: time="2024-10-08T19:44:30.450794018Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-9qdpq,Uid:e4211795-44be-4c33-a1ed-5582f08a21b7,Namespace:kube-system,Attempt:0,}" Oct 8 19:44:30.456442 containerd[1437]: time="2024-10-08T19:44:30.456390666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb77fd48c-ql2qz,Uid:1544abfa-775d-4d7b-9360-1ce3bf50e572,Namespace:calico-system,Attempt:0,}" Oct 8 19:44:30.459305 kubelet[2440]: E1008 19:44:30.459267 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:30.460477 containerd[1437]: time="2024-10-08T19:44:30.460372291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:44:30.697119 containerd[1437]: time="2024-10-08T19:44:30.696408550Z" level=error msg="Failed to destroy network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.697535 containerd[1437]: time="2024-10-08T19:44:30.697501049Z" level=error msg="encountered an error cleaning up failed sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.697841 containerd[1437]: time="2024-10-08T19:44:30.697579068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkd98,Uid:580b162c-9f56-423c-982e-ca1911345f68,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.704201 kubelet[2440]: E1008 19:44:30.704072 2440 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.704358 kubelet[2440]: E1008 19:44:30.704244 2440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:30.704398 kubelet[2440]: E1008 19:44:30.704356 2440 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkd98" Oct 8 19:44:30.704586 kubelet[2440]: E1008 19:44:30.704433 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nkd98_calico-system(580b162c-9f56-423c-982e-ca1911345f68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nkd98_calico-system(580b162c-9f56-423c-982e-ca1911345f68)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nkd98" podUID="580b162c-9f56-423c-982e-ca1911345f68" Oct 8 19:44:30.705775 containerd[1437]: time="2024-10-08T19:44:30.705716319Z" level=error msg="Failed to destroy network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.707088 containerd[1437]: time="2024-10-08T19:44:30.706881355Z" level=error msg="encountered an error cleaning up failed sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.707376 containerd[1437]: time="2024-10-08T19:44:30.707163982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9qdpq,Uid:e4211795-44be-4c33-a1ed-5582f08a21b7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.707522 kubelet[2440]: E1008 19:44:30.707488 2440 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.707565 kubelet[2440]: E1008 19:44:30.707543 2440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9qdpq" Oct 8 19:44:30.707593 kubelet[2440]: E1008 19:44:30.707563 2440 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9qdpq" Oct 8 19:44:30.708213 kubelet[2440]: E1008 19:44:30.707607 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9qdpq_kube-system(e4211795-44be-4c33-a1ed-5582f08a21b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9qdpq_kube-system(e4211795-44be-4c33-a1ed-5582f08a21b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9qdpq" podUID="e4211795-44be-4c33-a1ed-5582f08a21b7" Oct 8 19:44:30.712098 containerd[1437]: 
time="2024-10-08T19:44:30.711875181Z" level=error msg="Failed to destroy network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.712695 containerd[1437]: time="2024-10-08T19:44:30.712319126Z" level=error msg="encountered an error cleaning up failed sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.712695 containerd[1437]: time="2024-10-08T19:44:30.712373459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb77fd48c-ql2qz,Uid:1544abfa-775d-4d7b-9360-1ce3bf50e572,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.712886 kubelet[2440]: E1008 19:44:30.712633 2440 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.712886 kubelet[2440]: E1008 19:44:30.712703 2440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" Oct 8 19:44:30.712886 kubelet[2440]: E1008 19:44:30.712723 2440 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" Oct 8 19:44:30.712976 kubelet[2440]: E1008 19:44:30.712799 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb77fd48c-ql2qz_calico-system(1544abfa-775d-4d7b-9360-1ce3bf50e572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb77fd48c-ql2qz_calico-system(1544abfa-775d-4d7b-9360-1ce3bf50e572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" podUID="1544abfa-775d-4d7b-9360-1ce3bf50e572" Oct 8 19:44:30.717254 containerd[1437]: time="2024-10-08T19:44:30.717192162Z" level=error msg="Failed to destroy network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.717557 containerd[1437]: time="2024-10-08T19:44:30.717525362Z" level=error msg="encountered an error cleaning up failed sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.717603 containerd[1437]: time="2024-10-08T19:44:30.717584416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9vxz5,Uid:7fbc9347-b968-44a3-a96a-e937c3f2240a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.717894 kubelet[2440]: E1008 19:44:30.717860 2440 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:30.717978 kubelet[2440]: E1008 19:44:30.717912 2440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9vxz5" Oct 8 
19:44:30.717978 kubelet[2440]: E1008 19:44:30.717932 2440 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9vxz5" Oct 8 19:44:30.718044 kubelet[2440]: E1008 19:44:30.717970 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9vxz5_kube-system(7fbc9347-b968-44a3-a96a-e937c3f2240a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9vxz5_kube-system(7fbc9347-b968-44a3-a96a-e937c3f2240a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9vxz5" podUID="7fbc9347-b968-44a3-a96a-e937c3f2240a" Oct 8 19:44:31.425988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e-shm.mount: Deactivated successfully. Oct 8 19:44:31.426523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f-shm.mount: Deactivated successfully. Oct 8 19:44:31.426705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3-shm.mount: Deactivated successfully. 
Oct 8 19:44:31.461455 kubelet[2440]: I1008 19:44:31.461416 2440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:44:31.462713 kubelet[2440]: I1008 19:44:31.462027 2440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:44:31.463125 containerd[1437]: time="2024-10-08T19:44:31.462941094Z" level=info msg="StopPodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\"" Oct 8 19:44:31.464136 kubelet[2440]: I1008 19:44:31.463631 2440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:44:31.464197 containerd[1437]: time="2024-10-08T19:44:31.464134125Z" level=info msg="StopPodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\"" Oct 8 19:44:31.465883 containerd[1437]: time="2024-10-08T19:44:31.465832151Z" level=info msg="StopPodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\"" Oct 8 19:44:31.467201 containerd[1437]: time="2024-10-08T19:44:31.467155292Z" level=info msg="Ensure that sandbox 0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744 in task-service has been cleanup successfully" Oct 8 19:44:31.468141 containerd[1437]: time="2024-10-08T19:44:31.468029291Z" level=info msg="Ensure that sandbox 9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e in task-service has been cleanup successfully" Oct 8 19:44:31.468395 kubelet[2440]: I1008 19:44:31.468368 2440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:44:31.470523 containerd[1437]: time="2024-10-08T19:44:31.470483129Z" level=info msg="StopPodSandbox for 
\"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\"" Oct 8 19:44:31.470736 containerd[1437]: time="2024-10-08T19:44:31.470708300Z" level=info msg="Ensure that sandbox 963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3 in task-service has been cleanup successfully" Oct 8 19:44:31.474717 containerd[1437]: time="2024-10-08T19:44:31.474173248Z" level=info msg="Ensure that sandbox ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f in task-service has been cleanup successfully" Oct 8 19:44:31.531840 containerd[1437]: time="2024-10-08T19:44:31.531783465Z" level=error msg="StopPodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" failed" error="failed to destroy network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:31.532278 kubelet[2440]: E1008 19:44:31.532094 2440 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:44:31.532278 kubelet[2440]: E1008 19:44:31.532155 2440 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f"} Oct 8 19:44:31.532278 kubelet[2440]: E1008 19:44:31.532225 2440 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4211795-44be-4c33-a1ed-5582f08a21b7\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:44:31.532278 kubelet[2440]: E1008 19:44:31.532247 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4211795-44be-4c33-a1ed-5582f08a21b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9qdpq" podUID="e4211795-44be-4c33-a1ed-5582f08a21b7" Oct 8 19:44:31.542562 containerd[1437]: time="2024-10-08T19:44:31.542504582Z" level=error msg="StopPodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" failed" error="failed to destroy network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:31.542774 kubelet[2440]: E1008 19:44:31.542736 2440 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 
19:44:31.542817 kubelet[2440]: E1008 19:44:31.542791 2440 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744"} Oct 8 19:44:31.542840 kubelet[2440]: E1008 19:44:31.542825 2440 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1544abfa-775d-4d7b-9360-1ce3bf50e572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:44:31.542908 kubelet[2440]: E1008 19:44:31.542847 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1544abfa-775d-4d7b-9360-1ce3bf50e572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" podUID="1544abfa-775d-4d7b-9360-1ce3bf50e572" Oct 8 19:44:31.543953 containerd[1437]: time="2024-10-08T19:44:31.543906901Z" level=error msg="StopPodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" failed" error="failed to destroy network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:31.544381 containerd[1437]: 
time="2024-10-08T19:44:31.544199568Z" level=error msg="StopPodSandbox for \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" failed" error="failed to destroy network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:44:31.544904 kubelet[2440]: E1008 19:44:31.544864 2440 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:44:31.544962 kubelet[2440]: E1008 19:44:31.544915 2440 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e"} Oct 8 19:44:31.544962 kubelet[2440]: E1008 19:44:31.544945 2440 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fbc9347-b968-44a3-a96a-e937c3f2240a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:44:31.545033 kubelet[2440]: E1008 19:44:31.544969 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fbc9347-b968-44a3-a96a-e937c3f2240a\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9vxz5" podUID="7fbc9347-b968-44a3-a96a-e937c3f2240a" Oct 8 19:44:31.545088 kubelet[2440]: E1008 19:44:31.545061 2440 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:44:31.545115 kubelet[2440]: E1008 19:44:31.545091 2440 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3"} Oct 8 19:44:31.545147 kubelet[2440]: E1008 19:44:31.545114 2440 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"580b162c-9f56-423c-982e-ca1911345f68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:44:31.545147 kubelet[2440]: E1008 19:44:31.545132 2440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"580b162c-9f56-423c-982e-ca1911345f68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nkd98" podUID="580b162c-9f56-423c-982e-ca1911345f68" Oct 8 19:44:33.492274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082372190.mount: Deactivated successfully. Oct 8 19:44:33.550892 containerd[1437]: time="2024-10-08T19:44:33.550820461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:44:33.556297 containerd[1437]: time="2024-10-08T19:44:33.556238194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:33.558322 containerd[1437]: time="2024-10-08T19:44:33.558281502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.097847797s" Oct 8 19:44:33.558366 containerd[1437]: time="2024-10-08T19:44:33.558323350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:44:33.558987 containerd[1437]: time="2024-10-08T19:44:33.558950522Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:33.561876 containerd[1437]: time="2024-10-08T19:44:33.561286930Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:33.566187 containerd[1437]: time="2024-10-08T19:44:33.566141826Z" level=info msg="CreateContainer within sandbox \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:44:33.581357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111623005.mount: Deactivated successfully. Oct 8 19:44:33.583698 containerd[1437]: time="2024-10-08T19:44:33.583646248Z" level=info msg="CreateContainer within sandbox \"7635e81d8a01b5285f087e623376fd73284074018eb6609f04aed8cbcce6765c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4287eb24a3891eda2d3016723f2cbfa56c5bb01d71f1866803d6227e189cbad5\"" Oct 8 19:44:33.584551 containerd[1437]: time="2024-10-08T19:44:33.584508508Z" level=info msg="StartContainer for \"4287eb24a3891eda2d3016723f2cbfa56c5bb01d71f1866803d6227e189cbad5\"" Oct 8 19:44:33.629885 systemd[1]: Started cri-containerd-4287eb24a3891eda2d3016723f2cbfa56c5bb01d71f1866803d6227e189cbad5.scope - libcontainer container 4287eb24a3891eda2d3016723f2cbfa56c5bb01d71f1866803d6227e189cbad5. Oct 8 19:44:33.656061 containerd[1437]: time="2024-10-08T19:44:33.656004825Z" level=info msg="StartContainer for \"4287eb24a3891eda2d3016723f2cbfa56c5bb01d71f1866803d6227e189cbad5\" returns successfully" Oct 8 19:44:33.831740 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:44:33.831951 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 19:44:34.479770 kubelet[2440]: E1008 19:44:34.479705 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:34.509127 kubelet[2440]: I1008 19:44:34.508945 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jslp9" podStartSLOduration=1.9177743889999999 podStartE2EDuration="12.508929393s" podCreationTimestamp="2024-10-08 19:44:22 +0000 UTC" firstStartedPulling="2024-10-08 19:44:22.968211605 +0000 UTC m=+13.669743992" lastFinishedPulling="2024-10-08 19:44:33.559366609 +0000 UTC m=+24.260898996" observedRunningTime="2024-10-08 19:44:34.505061536 +0000 UTC m=+25.206593923" watchObservedRunningTime="2024-10-08 19:44:34.508929393 +0000 UTC m=+25.210461780" Oct 8 19:44:35.481097 kubelet[2440]: E1008 19:44:35.481060 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:41.356942 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:52106.service - OpenSSH per-connection server daemon (10.0.0.1:52106). Oct 8 19:44:41.393670 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 52106 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:41.395014 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:41.398940 systemd-logind[1424]: New session 8 of user core. Oct 8 19:44:41.415871 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:44:41.639574 sshd[3780]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:41.643185 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:52106.service: Deactivated successfully. Oct 8 19:44:41.645035 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:44:41.645656 systemd-logind[1424]: Session 8 logged out. 
Waiting for processes to exit. Oct 8 19:44:41.646540 systemd-logind[1424]: Removed session 8. Oct 8 19:44:43.377931 containerd[1437]: time="2024-10-08T19:44:43.377565640Z" level=info msg="StopPodSandbox for \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\"" Oct 8 19:44:43.377931 containerd[1437]: time="2024-10-08T19:44:43.377777071Z" level=info msg="StopPodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\"" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.495 [INFO][3873] k8s.go 608: Cleaning up netns ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.497 [INFO][3873] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" iface="eth0" netns="/var/run/netns/cni-329f1812-b41b-14c3-ccca-282bfbc5631b" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.498 [INFO][3873] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" iface="eth0" netns="/var/run/netns/cni-329f1812-b41b-14c3-ccca-282bfbc5631b" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.502 [INFO][3873] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" iface="eth0" netns="/var/run/netns/cni-329f1812-b41b-14c3-ccca-282bfbc5631b" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.502 [INFO][3873] k8s.go 615: Releasing IP address(es) ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.502 [INFO][3873] utils.go 188: Calico CNI releasing IP address ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.644 [INFO][3891] ipam_plugin.go 417: Releasing address using handleID ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.644 [INFO][3891] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.644 [INFO][3891] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.653 [WARNING][3891] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.653 [INFO][3891] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.654 [INFO][3891] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:43.662014 containerd[1437]: 2024-10-08 19:44:43.657 [INFO][3873] k8s.go 621: Teardown processing complete. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:44:43.664520 containerd[1437]: time="2024-10-08T19:44:43.664326312Z" level=info msg="TearDown network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" successfully" Oct 8 19:44:43.664520 containerd[1437]: time="2024-10-08T19:44:43.664356996Z" level=info msg="StopPodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" returns successfully" Oct 8 19:44:43.665165 kubelet[2440]: E1008 19:44:43.665134 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:43.666094 systemd[1]: run-netns-cni\x2d329f1812\x2db41b\x2d14c3\x2dccca\x2d282bfbc5631b.mount: Deactivated successfully. 
Oct 8 19:44:43.667187 containerd[1437]: time="2024-10-08T19:44:43.666794273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9qdpq,Uid:e4211795-44be-4c33-a1ed-5582f08a21b7,Namespace:kube-system,Attempt:1,}" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.496 [INFO][3877] k8s.go 608: Cleaning up netns ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.496 [INFO][3877] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" iface="eth0" netns="/var/run/netns/cni-a45d51d2-0d44-ff66-48bc-c55bac5b31f7" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.498 [INFO][3877] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" iface="eth0" netns="/var/run/netns/cni-a45d51d2-0d44-ff66-48bc-c55bac5b31f7" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.502 [INFO][3877] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" iface="eth0" netns="/var/run/netns/cni-a45d51d2-0d44-ff66-48bc-c55bac5b31f7" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.502 [INFO][3877] k8s.go 615: Releasing IP address(es) ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.502 [INFO][3877] utils.go 188: Calico CNI releasing IP address ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.644 [INFO][3892] ipam_plugin.go 417: Releasing address using handleID ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.644 [INFO][3892] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.654 [INFO][3892] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.664 [WARNING][3892] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.664 [INFO][3892] ipam_plugin.go 445: Releasing address using workloadID ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.665 [INFO][3892] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:43.670217 containerd[1437]: 2024-10-08 19:44:43.667 [INFO][3877] k8s.go 621: Teardown processing complete. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:44:43.671968 containerd[1437]: time="2024-10-08T19:44:43.670338633Z" level=info msg="TearDown network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" successfully" Oct 8 19:44:43.671968 containerd[1437]: time="2024-10-08T19:44:43.670358476Z" level=info msg="StopPodSandbox for \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" returns successfully" Oct 8 19:44:43.672114 containerd[1437]: time="2024-10-08T19:44:43.672072767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkd98,Uid:580b162c-9f56-423c-982e-ca1911345f68,Namespace:calico-system,Attempt:1,}" Oct 8 19:44:43.674173 systemd[1]: run-netns-cni\x2da45d51d2\x2d0d44\x2dff66\x2d48bc\x2dc55bac5b31f7.mount: Deactivated successfully. 
Oct 8 19:44:43.867111 systemd-networkd[1383]: cali5c53a0fcf17: Link UP Oct 8 19:44:43.867313 systemd-networkd[1383]: cali5c53a0fcf17: Gained carrier Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.765 [INFO][3927] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.781 [INFO][3927] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0 coredns-6f6b679f8f- kube-system e4211795-44be-4c33-a1ed-5582f08a21b7 724 0 2024-10-08 19:44:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-9qdpq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c53a0fcf17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.781 [INFO][3927] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.819 [INFO][3952] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" HandleID="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.831 [INFO][3952] ipam_plugin.go 270: Auto assigning IP 
ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" HandleID="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006bdc30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-9qdpq", "timestamp":"2024-10-08 19:44:43.819063032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.831 [INFO][3952] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.831 [INFO][3952] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.835 [INFO][3952] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.837 [INFO][3952] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.843 [INFO][3952] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.847 [INFO][3952] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.848 [INFO][3952] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.850 [INFO][3952] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 
2024-10-08 19:44:43.850 [INFO][3952] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.851 [INFO][3952] ipam.go 1685: Creating new handle: k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.855 [INFO][3952] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.859 [INFO][3952] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.860 [INFO][3952] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" host="localhost" Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.860 [INFO][3952] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:44:43.879788 containerd[1437]: 2024-10-08 19:44:43.860 [INFO][3952] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" HandleID="k8s-pod-network.32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.880344 containerd[1437]: 2024-10-08 19:44:43.862 [INFO][3927] k8s.go 386: Populated endpoint ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e4211795-44be-4c33-a1ed-5582f08a21b7", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-9qdpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c53a0fcf17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:43.880344 containerd[1437]: 2024-10-08 19:44:43.862 [INFO][3927] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.880344 containerd[1437]: 2024-10-08 19:44:43.862 [INFO][3927] dataplane_linux.go 68: Setting the host side veth name to cali5c53a0fcf17 ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.880344 containerd[1437]: 2024-10-08 19:44:43.867 [INFO][3927] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.880344 containerd[1437]: 2024-10-08 19:44:43.867 [INFO][3927] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0", 
GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e4211795-44be-4c33-a1ed-5582f08a21b7", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac", Pod:"coredns-6f6b679f8f-9qdpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c53a0fcf17", MAC:"46:13:cb:36:06:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:43.880344 containerd[1437]: 2024-10-08 19:44:43.876 [INFO][3927] k8s.go 500: Wrote updated endpoint to datastore ContainerID="32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac" Namespace="kube-system" Pod="coredns-6f6b679f8f-9qdpq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:44:43.896699 containerd[1437]: 
time="2024-10-08T19:44:43.896533427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:43.896699 containerd[1437]: time="2024-10-08T19:44:43.896583154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:43.896699 containerd[1437]: time="2024-10-08T19:44:43.896597557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:43.896699 containerd[1437]: time="2024-10-08T19:44:43.896607158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:43.915845 systemd[1]: Started cri-containerd-32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac.scope - libcontainer container 32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac. 
Oct 8 19:44:43.927818 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:44:43.944033 containerd[1437]: time="2024-10-08T19:44:43.943890448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9qdpq,Uid:e4211795-44be-4c33-a1ed-5582f08a21b7,Namespace:kube-system,Attempt:1,} returns sandbox id \"32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac\"" Oct 8 19:44:43.944742 kubelet[2440]: E1008 19:44:43.944458 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:43.947167 containerd[1437]: time="2024-10-08T19:44:43.947132604Z" level=info msg="CreateContainer within sandbox \"32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:44:43.957697 containerd[1437]: time="2024-10-08T19:44:43.957617701Z" level=info msg="CreateContainer within sandbox \"32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f33bde6162691ffff1a7698f92cadd157afaebd56ce09e1b25ef1e261284bc98\"" Oct 8 19:44:43.958234 containerd[1437]: time="2024-10-08T19:44:43.958204947Z" level=info msg="StartContainer for \"f33bde6162691ffff1a7698f92cadd157afaebd56ce09e1b25ef1e261284bc98\"" Oct 8 19:44:43.969314 systemd-networkd[1383]: cali92cca13629b: Link UP Oct 8 19:44:43.970184 systemd-networkd[1383]: cali92cca13629b: Gained carrier Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.779 [INFO][3937] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.796 [INFO][3937] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nkd98-eth0 csi-node-driver- calico-system 
580b162c-9f56-423c-982e-ca1911345f68 725 0 2024-10-08 19:44:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-nkd98 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali92cca13629b [] []}} ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.796 [INFO][3937] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.826 [INFO][3957] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" HandleID="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.836 [INFO][3957] ipam_plugin.go 270: Auto assigning IP ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" HandleID="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000637c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nkd98", "timestamp":"2024-10-08 19:44:43.826507083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.836 [INFO][3957] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.860 [INFO][3957] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.860 [INFO][3957] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.939 [INFO][3957] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.943 [INFO][3957] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.949 [INFO][3957] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.951 [INFO][3957] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.953 [INFO][3957] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.953 [INFO][3957] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.954 [INFO][3957] ipam.go 1685: Creating new handle: k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3 Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.958 [INFO][3957] ipam.go 1203: 
Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.965 [INFO][3957] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.965 [INFO][3957] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" host="localhost" Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.965 [INFO][3957] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:43.985603 containerd[1437]: 2024-10-08 19:44:43.965 [INFO][3957] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" HandleID="k8s-pod-network.809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.986154 containerd[1437]: 2024-10-08 19:44:43.967 [INFO][3937] k8s.go 386: Populated endpoint ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nkd98-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"580b162c-9f56-423c-982e-ca1911345f68", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nkd98", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali92cca13629b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:43.986154 containerd[1437]: 2024-10-08 19:44:43.968 [INFO][3937] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.986154 containerd[1437]: 2024-10-08 19:44:43.968 [INFO][3937] dataplane_linux.go 68: Setting the host side veth name to cali92cca13629b ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.986154 containerd[1437]: 2024-10-08 19:44:43.969 [INFO][3937] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.986154 containerd[1437]: 2024-10-08 
19:44:43.969 [INFO][3937] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nkd98-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"580b162c-9f56-423c-982e-ca1911345f68", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3", Pod:"csi-node-driver-nkd98", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali92cca13629b", MAC:"4e:20:64:3d:a3:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:43.986154 containerd[1437]: 2024-10-08 19:44:43.982 [INFO][3937] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3" Namespace="calico-system" Pod="csi-node-driver-nkd98" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:44:43.989847 systemd[1]: Started cri-containerd-f33bde6162691ffff1a7698f92cadd157afaebd56ce09e1b25ef1e261284bc98.scope - libcontainer container f33bde6162691ffff1a7698f92cadd157afaebd56ce09e1b25ef1e261284bc98. Oct 8 19:44:44.004369 containerd[1437]: time="2024-10-08T19:44:44.004282209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:44.004454 containerd[1437]: time="2024-10-08T19:44:44.004382143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:44.004454 containerd[1437]: time="2024-10-08T19:44:44.004413548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:44.004512 containerd[1437]: time="2024-10-08T19:44:44.004442512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:44.025112 systemd[1]: Started cri-containerd-809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3.scope - libcontainer container 809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3. 
Oct 8 19:44:44.032028 containerd[1437]: time="2024-10-08T19:44:44.031896057Z" level=info msg="StartContainer for \"f33bde6162691ffff1a7698f92cadd157afaebd56ce09e1b25ef1e261284bc98\" returns successfully" Oct 8 19:44:44.041441 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:44:44.052869 containerd[1437]: time="2024-10-08T19:44:44.052825355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkd98,Uid:580b162c-9f56-423c-982e-ca1911345f68,Namespace:calico-system,Attempt:1,} returns sandbox id \"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3\"" Oct 8 19:44:44.054612 containerd[1437]: time="2024-10-08T19:44:44.054587085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:44:44.501513 kubelet[2440]: E1008 19:44:44.501319 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:44.513924 kubelet[2440]: I1008 19:44:44.513772 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9qdpq" podStartSLOduration=28.513756285 podStartE2EDuration="28.513756285s" podCreationTimestamp="2024-10-08 19:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:44:44.512825473 +0000 UTC m=+35.214357860" watchObservedRunningTime="2024-10-08 19:44:44.513756285 +0000 UTC m=+35.215288632" Oct 8 19:44:44.933639 containerd[1437]: time="2024-10-08T19:44:44.933524080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:44.934991 containerd[1437]: time="2024-10-08T19:44:44.934965005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, 
bytes read=7211060" Oct 8 19:44:44.935701 containerd[1437]: time="2024-10-08T19:44:44.935666145Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:44.938948 containerd[1437]: time="2024-10-08T19:44:44.938536513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:44.939503 containerd[1437]: time="2024-10-08T19:44:44.939469726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 884.851717ms" Oct 8 19:44:44.939602 containerd[1437]: time="2024-10-08T19:44:44.939584422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:44:44.942768 containerd[1437]: time="2024-10-08T19:44:44.942732870Z" level=info msg="CreateContainer within sandbox \"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:44:44.958651 containerd[1437]: time="2024-10-08T19:44:44.958613289Z" level=info msg="CreateContainer within sandbox \"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6f1f66fd5487a540c7af816d6f4aedd9ca1fbe8c9ff706f0f206b5736f9fe24\"" Oct 8 19:44:44.959613 containerd[1437]: time="2024-10-08T19:44:44.959181810Z" level=info msg="StartContainer for 
\"b6f1f66fd5487a540c7af816d6f4aedd9ca1fbe8c9ff706f0f206b5736f9fe24\"" Oct 8 19:44:44.964964 systemd-networkd[1383]: cali5c53a0fcf17: Gained IPv6LL Oct 8 19:44:44.997941 systemd[1]: Started cri-containerd-b6f1f66fd5487a540c7af816d6f4aedd9ca1fbe8c9ff706f0f206b5736f9fe24.scope - libcontainer container b6f1f66fd5487a540c7af816d6f4aedd9ca1fbe8c9ff706f0f206b5736f9fe24. Oct 8 19:44:45.021798 containerd[1437]: time="2024-10-08T19:44:45.021754230Z" level=info msg="StartContainer for \"b6f1f66fd5487a540c7af816d6f4aedd9ca1fbe8c9ff706f0f206b5736f9fe24\" returns successfully" Oct 8 19:44:45.024602 containerd[1437]: time="2024-10-08T19:44:45.024574060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:44:45.282813 systemd-networkd[1383]: cali92cca13629b: Gained IPv6LL Oct 8 19:44:45.377371 containerd[1437]: time="2024-10-08T19:44:45.377319573Z" level=info msg="StopPodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\"" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.423 [INFO][4199] k8s.go 608: Cleaning up netns ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.424 [INFO][4199] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" iface="eth0" netns="/var/run/netns/cni-74cbbab7-a427-eb8c-9d26-1f3accfbcb6f" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.425 [INFO][4199] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" iface="eth0" netns="/var/run/netns/cni-74cbbab7-a427-eb8c-9d26-1f3accfbcb6f" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.428 [INFO][4199] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" iface="eth0" netns="/var/run/netns/cni-74cbbab7-a427-eb8c-9d26-1f3accfbcb6f" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.428 [INFO][4199] k8s.go 615: Releasing IP address(es) ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.428 [INFO][4199] utils.go 188: Calico CNI releasing IP address ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.452 [INFO][4207] ipam_plugin.go 417: Releasing address using handleID ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.452 [INFO][4207] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.452 [INFO][4207] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.460 [WARNING][4207] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.460 [INFO][4207] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.461 [INFO][4207] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:45.466603 containerd[1437]: 2024-10-08 19:44:45.463 [INFO][4199] k8s.go 621: Teardown processing complete. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:44:45.467010 containerd[1437]: time="2024-10-08T19:44:45.466816742Z" level=info msg="TearDown network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" successfully" Oct 8 19:44:45.467010 containerd[1437]: time="2024-10-08T19:44:45.466842626Z" level=info msg="StopPodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" returns successfully" Oct 8 19:44:45.467437 containerd[1437]: time="2024-10-08T19:44:45.467406464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb77fd48c-ql2qz,Uid:1544abfa-775d-4d7b-9360-1ce3bf50e572,Namespace:calico-system,Attempt:1,}" Oct 8 19:44:45.506040 kubelet[2440]: E1008 19:44:45.506009 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:45.667357 systemd[1]: run-netns-cni\x2d74cbbab7\x2da427\x2deb8c\x2d9d26\x2d1f3accfbcb6f.mount: Deactivated successfully. 
Oct 8 19:44:45.676886 systemd-networkd[1383]: cali5fe17e05f47: Link UP Oct 8 19:44:45.678853 systemd-networkd[1383]: cali5fe17e05f47: Gained carrier Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.501 [INFO][4216] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.515 [INFO][4216] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0 calico-kube-controllers-6bb77fd48c- calico-system 1544abfa-775d-4d7b-9360-1ce3bf50e572 763 0 2024-10-08 19:44:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb77fd48c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bb77fd48c-ql2qz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5fe17e05f47 [] []}} ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.515 [INFO][4216] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.538 [INFO][4229] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" HandleID="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" 
Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.547 [INFO][4229] ipam_plugin.go 270: Auto assigning IP ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" HandleID="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027de20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bb77fd48c-ql2qz", "timestamp":"2024-10-08 19:44:45.538571579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.548 [INFO][4229] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.548 [INFO][4229] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.548 [INFO][4229] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.549 [INFO][4229] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.648 [INFO][4229] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.653 [INFO][4229] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.654 [INFO][4229] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.657 [INFO][4229] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.657 [INFO][4229] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.659 [INFO][4229] ipam.go 1685: Creating new handle: k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853 Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.663 [INFO][4229] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.670 [INFO][4229] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" host="localhost" Oct 8 
19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.670 [INFO][4229] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" host="localhost" Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.670 [INFO][4229] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:45.693026 containerd[1437]: 2024-10-08 19:44:45.670 [INFO][4229] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" HandleID="k8s-pod-network.3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.693551 containerd[1437]: 2024-10-08 19:44:45.672 [INFO][4216] k8s.go 386: Populated endpoint ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0", GenerateName:"calico-kube-controllers-6bb77fd48c-", Namespace:"calico-system", SelfLink:"", UID:"1544abfa-775d-4d7b-9360-1ce3bf50e572", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb77fd48c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bb77fd48c-ql2qz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5fe17e05f47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:45.693551 containerd[1437]: 2024-10-08 19:44:45.672 [INFO][4216] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.693551 containerd[1437]: 2024-10-08 19:44:45.672 [INFO][4216] dataplane_linux.go 68: Setting the host side veth name to cali5fe17e05f47 ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.693551 containerd[1437]: 2024-10-08 19:44:45.677 [INFO][4216] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.693551 containerd[1437]: 2024-10-08 19:44:45.679 [INFO][4216] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" 
Pod="calico-kube-controllers-6bb77fd48c-ql2qz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0", GenerateName:"calico-kube-controllers-6bb77fd48c-", Namespace:"calico-system", SelfLink:"", UID:"1544abfa-775d-4d7b-9360-1ce3bf50e572", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb77fd48c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853", Pod:"calico-kube-controllers-6bb77fd48c-ql2qz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5fe17e05f47", MAC:"d6:3b:ee:de:9c:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:45.693551 containerd[1437]: 2024-10-08 19:44:45.688 [INFO][4216] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853" Namespace="calico-system" Pod="calico-kube-controllers-6bb77fd48c-ql2qz" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:44:45.713330 containerd[1437]: time="2024-10-08T19:44:45.713026170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:45.713330 containerd[1437]: time="2024-10-08T19:44:45.713076057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:45.713330 containerd[1437]: time="2024-10-08T19:44:45.713092740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:45.713330 containerd[1437]: time="2024-10-08T19:44:45.713102861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:45.739929 systemd[1]: Started cri-containerd-3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853.scope - libcontainer container 3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853. 
Oct 8 19:44:45.755285 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:44:45.778013 containerd[1437]: time="2024-10-08T19:44:45.777971947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb77fd48c-ql2qz,Uid:1544abfa-775d-4d7b-9360-1ce3bf50e572,Namespace:calico-system,Attempt:1,} returns sandbox id \"3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853\"" Oct 8 19:44:45.990110 containerd[1437]: time="2024-10-08T19:44:45.989996090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:45.990929 containerd[1437]: time="2024-10-08T19:44:45.990852849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:44:45.991937 containerd[1437]: time="2024-10-08T19:44:45.991897433Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:45.994511 containerd[1437]: time="2024-10-08T19:44:45.994476750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:45.995441 containerd[1437]: time="2024-10-08T19:44:45.995311585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 970.703201ms" Oct 8 19:44:45.995441 
containerd[1437]: time="2024-10-08T19:44:45.995342149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:44:45.997125 containerd[1437]: time="2024-10-08T19:44:45.996915087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:44:45.998375 containerd[1437]: time="2024-10-08T19:44:45.998328682Z" level=info msg="CreateContainer within sandbox \"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:44:46.011584 containerd[1437]: time="2024-10-08T19:44:46.011538027Z" level=info msg="CreateContainer within sandbox \"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2fd91255be3cbb9231d77e327b3e880efeee80f56b3d3ac6811771a038a7a892\"" Oct 8 19:44:46.014157 containerd[1437]: time="2024-10-08T19:44:46.012728747Z" level=info msg="StartContainer for \"2fd91255be3cbb9231d77e327b3e880efeee80f56b3d3ac6811771a038a7a892\"" Oct 8 19:44:46.037864 systemd[1]: Started cri-containerd-2fd91255be3cbb9231d77e327b3e880efeee80f56b3d3ac6811771a038a7a892.scope - libcontainer container 2fd91255be3cbb9231d77e327b3e880efeee80f56b3d3ac6811771a038a7a892. 
Oct 8 19:44:46.061126 containerd[1437]: time="2024-10-08T19:44:46.061086847Z" level=info msg="StartContainer for \"2fd91255be3cbb9231d77e327b3e880efeee80f56b3d3ac6811771a038a7a892\" returns successfully" Oct 8 19:44:46.377461 containerd[1437]: time="2024-10-08T19:44:46.377344276Z" level=info msg="StopPodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\"" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.433 [INFO][4374] k8s.go 608: Cleaning up netns ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.434 [INFO][4374] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" iface="eth0" netns="/var/run/netns/cni-b336f5fb-a693-7332-a406-472c7657ea45" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.435 [INFO][4374] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" iface="eth0" netns="/var/run/netns/cni-b336f5fb-a693-7332-a406-472c7657ea45" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.435 [INFO][4374] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" iface="eth0" netns="/var/run/netns/cni-b336f5fb-a693-7332-a406-472c7657ea45" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.435 [INFO][4374] k8s.go 615: Releasing IP address(es) ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.435 [INFO][4374] utils.go 188: Calico CNI releasing IP address ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.473 [INFO][4381] ipam_plugin.go 417: Releasing address using handleID ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.474 [INFO][4381] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.474 [INFO][4381] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.481 [WARNING][4381] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.481 [INFO][4381] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.482 [INFO][4381] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:46.486829 containerd[1437]: 2024-10-08 19:44:46.485 [INFO][4374] k8s.go 621: Teardown processing complete. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:44:46.487410 containerd[1437]: time="2024-10-08T19:44:46.487011337Z" level=info msg="TearDown network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" successfully" Oct 8 19:44:46.487410 containerd[1437]: time="2024-10-08T19:44:46.487047902Z" level=info msg="StopPodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" returns successfully" Oct 8 19:44:46.487458 kubelet[2440]: E1008 19:44:46.487288 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:46.487734 containerd[1437]: time="2024-10-08T19:44:46.487710711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9vxz5,Uid:7fbc9347-b968-44a3-a96a-e937c3f2240a,Namespace:kube-system,Attempt:1,}" Oct 8 19:44:46.511905 kubelet[2440]: E1008 19:44:46.511880 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:46.513500 kubelet[2440]: I1008 19:44:46.513454 2440 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:44:46.514747 kubelet[2440]: I1008 19:44:46.514727 2440 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:44:46.617618 systemd-networkd[1383]: cali7780bac7f30: Link UP Oct 8 19:44:46.618360 systemd-networkd[1383]: cali7780bac7f30: Gained carrier Oct 8 19:44:46.626789 kubelet[2440]: I1008 19:44:46.626280 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nkd98" podStartSLOduration=22.684105276 podStartE2EDuration="24.626260935s" podCreationTimestamp="2024-10-08 19:44:22 +0000 UTC" firstStartedPulling="2024-10-08 19:44:44.054204551 +0000 UTC m=+34.755736938" lastFinishedPulling="2024-10-08 19:44:45.99636025 +0000 UTC m=+36.697892597" observedRunningTime="2024-10-08 19:44:46.525311045 +0000 UTC m=+37.226843432" watchObservedRunningTime="2024-10-08 19:44:46.626260935 +0000 UTC m=+37.327793322" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.543 [INFO][4389] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.555 [INFO][4389] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0 coredns-6f6b679f8f- kube-system 7fbc9347-b968-44a3-a96a-e937c3f2240a 784 0 2024-10-08 19:44:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-9vxz5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali7780bac7f30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.555 [INFO][4389] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.580 [INFO][4403] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" HandleID="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.591 [INFO][4403] ipam_plugin.go 270: Auto assigning IP ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" HandleID="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3430), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-9vxz5", "timestamp":"2024-10-08 19:44:46.580388049 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.591 [INFO][4403] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.591 [INFO][4403] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.591 [INFO][4403] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.593 [INFO][4403] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.596 [INFO][4403] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.599 [INFO][4403] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.601 [INFO][4403] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.603 [INFO][4403] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.603 [INFO][4403] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.605 [INFO][4403] ipam.go 1685: Creating new handle: k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.609 [INFO][4403] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.614 [INFO][4403] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] 
block=192.168.88.128/26 handle="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.614 [INFO][4403] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" host="localhost" Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.614 [INFO][4403] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:46.630762 containerd[1437]: 2024-10-08 19:44:46.614 [INFO][4403] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" HandleID="k8s-pod-network.7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.631942 containerd[1437]: 2024-10-08 19:44:46.616 [INFO][4389] k8s.go 386: Populated endpoint ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7fbc9347-b968-44a3-a96a-e937c3f2240a", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-9vxz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7780bac7f30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:46.631942 containerd[1437]: 2024-10-08 19:44:46.616 [INFO][4389] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.631942 containerd[1437]: 2024-10-08 19:44:46.616 [INFO][4389] dataplane_linux.go 68: Setting the host side veth name to cali7780bac7f30 ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.631942 containerd[1437]: 2024-10-08 19:44:46.617 [INFO][4389] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.631942 containerd[1437]: 
2024-10-08 19:44:46.617 [INFO][4389] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7fbc9347-b968-44a3-a96a-e937c3f2240a", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d", Pod:"coredns-6f6b679f8f-9vxz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7780bac7f30", MAC:"8e:84:f8:9b:32:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:46.631942 containerd[1437]: 2024-10-08 19:44:46.628 [INFO][4389] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d" Namespace="kube-system" Pod="coredns-6f6b679f8f-9vxz5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:44:46.646526 containerd[1437]: time="2024-10-08T19:44:46.646413163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:46.646526 containerd[1437]: time="2024-10-08T19:44:46.646468771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:46.647172 containerd[1437]: time="2024-10-08T19:44:46.647099096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:46.647172 containerd[1437]: time="2024-10-08T19:44:46.647145222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:46.654979 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:60294.service - OpenSSH per-connection server daemon (10.0.0.1:60294). Oct 8 19:44:46.676953 systemd[1]: Started cri-containerd-7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d.scope - libcontainer container 7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d. Oct 8 19:44:46.683508 systemd[1]: run-netns-cni\x2db336f5fb\x2da693\x2d7332\x2da406\x2d472c7657ea45.mount: Deactivated successfully. 
Oct 8 19:44:46.691131 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:44:46.701362 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 60294 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:46.702066 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:46.706820 systemd-logind[1424]: New session 9 of user core. Oct 8 19:44:46.713841 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:44:46.715199 containerd[1437]: time="2024-10-08T19:44:46.715100556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9vxz5,Uid:7fbc9347-b968-44a3-a96a-e937c3f2240a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d\"" Oct 8 19:44:46.715847 kubelet[2440]: E1008 19:44:46.715820 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:46.718753 containerd[1437]: time="2024-10-08T19:44:46.718668836Z" level=info msg="CreateContainer within sandbox \"7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:44:46.738407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732067684.mount: Deactivated successfully. 
Oct 8 19:44:46.744836 containerd[1437]: time="2024-10-08T19:44:46.744713776Z" level=info msg="CreateContainer within sandbox \"7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77c8e449d952f5dda1fd325e12fede86c7a2460a83a7d859c28fd0c35d7f0540\"" Oct 8 19:44:46.746415 containerd[1437]: time="2024-10-08T19:44:46.745478879Z" level=info msg="StartContainer for \"77c8e449d952f5dda1fd325e12fede86c7a2460a83a7d859c28fd0c35d7f0540\"" Oct 8 19:44:46.800844 systemd[1]: Started cri-containerd-77c8e449d952f5dda1fd325e12fede86c7a2460a83a7d859c28fd0c35d7f0540.scope - libcontainer container 77c8e449d952f5dda1fd325e12fede86c7a2460a83a7d859c28fd0c35d7f0540. Oct 8 19:44:46.881533 containerd[1437]: time="2024-10-08T19:44:46.880587840Z" level=info msg="StartContainer for \"77c8e449d952f5dda1fd325e12fede86c7a2460a83a7d859c28fd0c35d7f0540\" returns successfully" Oct 8 19:44:47.011783 systemd-networkd[1383]: cali5fe17e05f47: Gained IPv6LL Oct 8 19:44:47.048030 sshd[4439]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:47.052659 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:60294.service: Deactivated successfully. Oct 8 19:44:47.054377 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:44:47.055814 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:44:47.057044 systemd-logind[1424]: Removed session 9. 
Oct 8 19:44:47.219057 containerd[1437]: time="2024-10-08T19:44:47.218937303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:47.220154 containerd[1437]: time="2024-10-08T19:44:47.220121778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:44:47.221202 containerd[1437]: time="2024-10-08T19:44:47.221121309Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:47.223314 containerd[1437]: time="2024-10-08T19:44:47.223257908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:47.224673 containerd[1437]: time="2024-10-08T19:44:47.224076455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.227129364s" Oct 8 19:44:47.224673 containerd[1437]: time="2024-10-08T19:44:47.224109580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:44:47.231386 containerd[1437]: time="2024-10-08T19:44:47.231346567Z" level=info msg="CreateContainer within sandbox \"3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 
19:44:47.248891 containerd[1437]: time="2024-10-08T19:44:47.248837736Z" level=info msg="CreateContainer within sandbox \"3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"92d15278ca932ee567ba86ab1f93585665bcdd3036325e7bcde26ad48d88cd3d\"" Oct 8 19:44:47.249621 containerd[1437]: time="2024-10-08T19:44:47.249568751Z" level=info msg="StartContainer for \"92d15278ca932ee567ba86ab1f93585665bcdd3036325e7bcde26ad48d88cd3d\"" Oct 8 19:44:47.282899 systemd[1]: Started cri-containerd-92d15278ca932ee567ba86ab1f93585665bcdd3036325e7bcde26ad48d88cd3d.scope - libcontainer container 92d15278ca932ee567ba86ab1f93585665bcdd3036325e7bcde26ad48d88cd3d. Oct 8 19:44:47.319699 containerd[1437]: time="2024-10-08T19:44:47.319633720Z" level=info msg="StartContainer for \"92d15278ca932ee567ba86ab1f93585665bcdd3036325e7bcde26ad48d88cd3d\" returns successfully" Oct 8 19:44:47.518030 kubelet[2440]: E1008 19:44:47.517818 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:47.532333 kubelet[2440]: I1008 19:44:47.531492 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bb77fd48c-ql2qz" podStartSLOduration=24.086742486 podStartE2EDuration="25.531477361s" podCreationTimestamp="2024-10-08 19:44:22 +0000 UTC" firstStartedPulling="2024-10-08 19:44:45.780003587 +0000 UTC m=+36.481535974" lastFinishedPulling="2024-10-08 19:44:47.224738462 +0000 UTC m=+37.926270849" observedRunningTime="2024-10-08 19:44:47.529202864 +0000 UTC m=+38.230735211" watchObservedRunningTime="2024-10-08 19:44:47.531477361 +0000 UTC m=+38.233009708" Oct 8 19:44:47.542137 kubelet[2440]: I1008 19:44:47.542068 2440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9vxz5" 
podStartSLOduration=31.542051425 podStartE2EDuration="31.542051425s" podCreationTimestamp="2024-10-08 19:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:44:47.541211795 +0000 UTC m=+38.242744182" watchObservedRunningTime="2024-10-08 19:44:47.542051425 +0000 UTC m=+38.243583812" Oct 8 19:44:48.290849 systemd-networkd[1383]: cali7780bac7f30: Gained IPv6LL Oct 8 19:44:48.518711 kubelet[2440]: I1008 19:44:48.518661 2440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:44:48.519357 kubelet[2440]: E1008 19:44:48.519339 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:49.520305 kubelet[2440]: E1008 19:44:49.520270 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:50.261586 kubelet[2440]: I1008 19:44:50.261500 2440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:44:50.261586 kubelet[2440]: E1008 19:44:50.261906 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:50.522662 kubelet[2440]: E1008 19:44:50.522515 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:44:50.627702 kernel: bpftool[4686]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:44:50.788023 systemd-networkd[1383]: vxlan.calico: Link UP Oct 8 19:44:50.788034 systemd-networkd[1383]: vxlan.calico: Gained carrier Oct 8 19:44:52.002827 systemd-networkd[1383]: vxlan.calico: 
Gained IPv6LL Oct 8 19:44:52.062946 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:60372.service - OpenSSH per-connection server daemon (10.0.0.1:60372). Oct 8 19:44:52.120840 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 60372 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:52.123114 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:52.127335 systemd-logind[1424]: New session 10 of user core. Oct 8 19:44:52.136860 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:44:52.375794 sshd[4804]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:52.383607 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:60372.service: Deactivated successfully. Oct 8 19:44:52.385502 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:44:52.388977 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:44:52.390288 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:60384.service - OpenSSH per-connection server daemon (10.0.0.1:60384). Oct 8 19:44:52.391017 systemd-logind[1424]: Removed session 10. Oct 8 19:44:52.428194 sshd[4822]: Accepted publickey for core from 10.0.0.1 port 60384 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:52.430224 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:52.433980 systemd-logind[1424]: New session 11 of user core. Oct 8 19:44:52.446874 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:44:52.672607 sshd[4822]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:52.683577 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:60384.service: Deactivated successfully. Oct 8 19:44:52.685601 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:44:52.687561 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. 
Oct 8 19:44:52.696066 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:48824.service - OpenSSH per-connection server daemon (10.0.0.1:48824). Oct 8 19:44:52.698387 systemd-logind[1424]: Removed session 11. Oct 8 19:44:52.734339 sshd[4835]: Accepted publickey for core from 10.0.0.1 port 48824 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:52.735948 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:52.743212 systemd-logind[1424]: New session 12 of user core. Oct 8 19:44:52.749858 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:44:52.937586 sshd[4835]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:52.941944 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:44:52.945889 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:44:52.946501 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:48824.service: Deactivated successfully. Oct 8 19:44:52.951635 systemd-logind[1424]: Removed session 12. Oct 8 19:44:56.902495 kubelet[2440]: I1008 19:44:56.902308 2440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:44:56.953152 systemd[1]: run-containerd-runc-k8s.io-92d15278ca932ee567ba86ab1f93585665bcdd3036325e7bcde26ad48d88cd3d-runc.JR5vMB.mount: Deactivated successfully. Oct 8 19:44:57.950495 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:48834.service - OpenSSH per-connection server daemon (10.0.0.1:48834). Oct 8 19:44:58.003805 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 48834 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:58.005883 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:58.010724 systemd-logind[1424]: New session 13 of user core. Oct 8 19:44:58.019895 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 8 19:44:58.196178 sshd[4930]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:58.205489 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:48834.service: Deactivated successfully. Oct 8 19:44:58.208427 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:44:58.211013 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:44:58.217466 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:48844.service - OpenSSH per-connection server daemon (10.0.0.1:48844). Oct 8 19:44:58.219486 systemd-logind[1424]: Removed session 13. Oct 8 19:44:58.255010 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 48844 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:58.256407 sshd[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:58.263610 systemd-logind[1424]: New session 14 of user core. Oct 8 19:44:58.273888 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:44:58.530244 sshd[4945]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:58.540273 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:48844.service: Deactivated successfully. Oct 8 19:44:58.542291 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:44:58.544466 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:44:58.559528 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:48852.service - OpenSSH per-connection server daemon (10.0.0.1:48852). Oct 8 19:44:58.560718 systemd-logind[1424]: Removed session 14. Oct 8 19:44:58.588710 sshd[4957]: Accepted publickey for core from 10.0.0.1 port 48852 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:44:58.589403 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:44:58.593608 systemd-logind[1424]: New session 15 of user core. Oct 8 19:44:58.604878 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 8 19:44:59.946094 sshd[4957]: pam_unix(sshd:session): session closed for user core Oct 8 19:44:59.953395 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:48852.service: Deactivated successfully. Oct 8 19:44:59.958287 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:44:59.960535 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:44:59.973048 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:48866.service - OpenSSH per-connection server daemon (10.0.0.1:48866). Oct 8 19:44:59.974082 systemd-logind[1424]: Removed session 15. Oct 8 19:45:00.013465 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 48866 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:45:00.015291 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:45:00.019578 systemd-logind[1424]: New session 16 of user core. Oct 8 19:45:00.027834 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:45:00.370954 sshd[4986]: pam_unix(sshd:session): session closed for user core Oct 8 19:45:00.380549 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:48866.service: Deactivated successfully. Oct 8 19:45:00.383215 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:45:00.385163 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:45:00.394231 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:48870.service - OpenSSH per-connection server daemon (10.0.0.1:48870). Oct 8 19:45:00.395671 systemd-logind[1424]: Removed session 16. Oct 8 19:45:00.423108 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 48870 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:45:00.424648 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:45:00.429330 systemd-logind[1424]: New session 17 of user core. Oct 8 19:45:00.438860 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 8 19:45:00.585156 sshd[4999]: pam_unix(sshd:session): session closed for user core Oct 8 19:45:00.587984 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:48870.service: Deactivated successfully. Oct 8 19:45:00.590239 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:45:00.591819 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:45:00.592736 systemd-logind[1424]: Removed session 17. Oct 8 19:45:05.596246 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:54614.service - OpenSSH per-connection server daemon (10.0.0.1:54614). Oct 8 19:45:05.630186 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 54614 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:45:05.631427 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:45:05.634977 systemd-logind[1424]: New session 18 of user core. Oct 8 19:45:05.645844 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:45:05.773086 sshd[5026]: pam_unix(sshd:session): session closed for user core Oct 8 19:45:05.776396 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:54614.service: Deactivated successfully. Oct 8 19:45:05.778366 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:45:05.780201 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:45:05.781215 systemd-logind[1424]: Removed session 18. Oct 8 19:45:09.361160 containerd[1437]: time="2024-10-08T19:45:09.361118307Z" level=info msg="StopPodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\"" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.397 [WARNING][5057] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e4211795-44be-4c33-a1ed-5582f08a21b7", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac", Pod:"coredns-6f6b679f8f-9qdpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c53a0fcf17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.397 [INFO][5057] k8s.go 608: Cleaning up netns 
ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.397 [INFO][5057] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" iface="eth0" netns="" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.397 [INFO][5057] k8s.go 615: Releasing IP address(es) ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.397 [INFO][5057] utils.go 188: Calico CNI releasing IP address ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.419 [INFO][5067] ipam_plugin.go 417: Releasing address using handleID ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.419 [INFO][5067] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.419 [INFO][5067] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.427 [WARNING][5067] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.427 [INFO][5067] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.428 [INFO][5067] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.433110 containerd[1437]: 2024-10-08 19:45:09.430 [INFO][5057] k8s.go 621: Teardown processing complete. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.433543 containerd[1437]: time="2024-10-08T19:45:09.433131856Z" level=info msg="TearDown network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" successfully" Oct 8 19:45:09.433543 containerd[1437]: time="2024-10-08T19:45:09.433155699Z" level=info msg="StopPodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" returns successfully" Oct 8 19:45:09.434944 containerd[1437]: time="2024-10-08T19:45:09.433775875Z" level=info msg="RemovePodSandbox for \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\"" Oct 8 19:45:09.442246 containerd[1437]: time="2024-10-08T19:45:09.433808318Z" level=info msg="Forcibly stopping sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\"" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.474 [WARNING][5089] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"e4211795-44be-4c33-a1ed-5582f08a21b7", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32def399e181e4a27e495f07e7f3d7572c5d7cee2609821689723c00844813ac", Pod:"coredns-6f6b679f8f-9qdpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c53a0fcf17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.474 [INFO][5089] k8s.go 608: Cleaning up netns 
ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.474 [INFO][5089] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" iface="eth0" netns="" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.474 [INFO][5089] k8s.go 615: Releasing IP address(es) ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.474 [INFO][5089] utils.go 188: Calico CNI releasing IP address ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.494 [INFO][5097] ipam_plugin.go 417: Releasing address using handleID ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.494 [INFO][5097] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.494 [INFO][5097] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.502 [WARNING][5097] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.502 [INFO][5097] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" HandleID="k8s-pod-network.ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Workload="localhost-k8s-coredns--6f6b679f8f--9qdpq-eth0" Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.504 [INFO][5097] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.507225 containerd[1437]: 2024-10-08 19:45:09.505 [INFO][5089] k8s.go 621: Teardown processing complete. ContainerID="ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f" Oct 8 19:45:09.507764 containerd[1437]: time="2024-10-08T19:45:09.507734800Z" level=info msg="TearDown network for sandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" successfully" Oct 8 19:45:09.513754 containerd[1437]: time="2024-10-08T19:45:09.513719101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:45:09.513936 containerd[1437]: time="2024-10-08T19:45:09.513917999Z" level=info msg="RemovePodSandbox \"ed980de4e4b3d85aeb172300c5aaacc9ec0959991008fbfd94e340d0c41f776f\" returns successfully" Oct 8 19:45:09.519475 containerd[1437]: time="2024-10-08T19:45:09.519432498Z" level=info msg="StopPodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\"" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.552 [WARNING][5119] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7fbc9347-b968-44a3-a96a-e937c3f2240a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d", Pod:"coredns-6f6b679f8f-9vxz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7780bac7f30", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.552 [INFO][5119] k8s.go 608: Cleaning up netns ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.552 [INFO][5119] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" iface="eth0" netns="" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.552 [INFO][5119] k8s.go 615: Releasing IP address(es) ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.552 [INFO][5119] utils.go 188: Calico CNI releasing IP address ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.570 [INFO][5127] ipam_plugin.go 417: Releasing address using handleID ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.571 [INFO][5127] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.571 [INFO][5127] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.580 [WARNING][5127] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.580 [INFO][5127] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.582 [INFO][5127] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.586328 containerd[1437]: 2024-10-08 19:45:09.584 [INFO][5119] k8s.go 621: Teardown processing complete. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.586776 containerd[1437]: time="2024-10-08T19:45:09.586376429Z" level=info msg="TearDown network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" successfully" Oct 8 19:45:09.586776 containerd[1437]: time="2024-10-08T19:45:09.586401912Z" level=info msg="StopPodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" returns successfully" Oct 8 19:45:09.587319 containerd[1437]: time="2024-10-08T19:45:09.586806628Z" level=info msg="RemovePodSandbox for \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\"" Oct 8 19:45:09.587319 containerd[1437]: time="2024-10-08T19:45:09.586839711Z" level=info msg="Forcibly stopping sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\"" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.622 [WARNING][5150] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7fbc9347-b968-44a3-a96a-e937c3f2240a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7873062e24d08fc995fed8f2c5387487548cb9917b47047d76a903085d5a381d", Pod:"coredns-6f6b679f8f-9vxz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7780bac7f30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.622 [INFO][5150] k8s.go 608: 
Cleaning up netns ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.622 [INFO][5150] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" iface="eth0" netns="" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.622 [INFO][5150] k8s.go 615: Releasing IP address(es) ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.622 [INFO][5150] utils.go 188: Calico CNI releasing IP address ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.640 [INFO][5157] ipam_plugin.go 417: Releasing address using handleID ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.640 [INFO][5157] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.641 [INFO][5157] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.648 [WARNING][5157] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.648 [INFO][5157] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" HandleID="k8s-pod-network.9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Workload="localhost-k8s-coredns--6f6b679f8f--9vxz5-eth0" Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.650 [INFO][5157] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.653127 containerd[1437]: 2024-10-08 19:45:09.651 [INFO][5150] k8s.go 621: Teardown processing complete. ContainerID="9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e" Oct 8 19:45:09.653127 containerd[1437]: time="2024-10-08T19:45:09.653093060Z" level=info msg="TearDown network for sandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" successfully" Oct 8 19:45:09.656252 containerd[1437]: time="2024-10-08T19:45:09.656177299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:45:09.656303 containerd[1437]: time="2024-10-08T19:45:09.656285069Z" level=info msg="RemovePodSandbox \"9d21ec807eb0f82ea1c209b4a063f87b6947c7ba977f6b3c5c442f3964797d1e\" returns successfully" Oct 8 19:45:09.657011 containerd[1437]: time="2024-10-08T19:45:09.656732149Z" level=info msg="StopPodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\"" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.689 [WARNING][5179] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0", GenerateName:"calico-kube-controllers-6bb77fd48c-", Namespace:"calico-system", SelfLink:"", UID:"1544abfa-775d-4d7b-9360-1ce3bf50e572", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb77fd48c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853", Pod:"calico-kube-controllers-6bb77fd48c-ql2qz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5fe17e05f47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.690 [INFO][5179] k8s.go 608: Cleaning up netns ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.690 [INFO][5179] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" iface="eth0" netns="" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.690 [INFO][5179] k8s.go 615: Releasing IP address(es) ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.690 [INFO][5179] utils.go 188: Calico CNI releasing IP address ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.707 [INFO][5186] ipam_plugin.go 417: Releasing address using handleID ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.707 [INFO][5186] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.707 [INFO][5186] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.715 [WARNING][5186] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.715 [INFO][5186] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.716 [INFO][5186] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.719261 containerd[1437]: 2024-10-08 19:45:09.717 [INFO][5179] k8s.go 621: Teardown processing complete. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.719261 containerd[1437]: time="2024-10-08T19:45:09.719137631Z" level=info msg="TearDown network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" successfully" Oct 8 19:45:09.719261 containerd[1437]: time="2024-10-08T19:45:09.719161793Z" level=info msg="StopPodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" returns successfully" Oct 8 19:45:09.719708 containerd[1437]: time="2024-10-08T19:45:09.719567589Z" level=info msg="RemovePodSandbox for \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\"" Oct 8 19:45:09.719708 containerd[1437]: time="2024-10-08T19:45:09.719595952Z" level=info msg="Forcibly stopping sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\"" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.751 [WARNING][5209] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0", GenerateName:"calico-kube-controllers-6bb77fd48c-", Namespace:"calico-system", SelfLink:"", UID:"1544abfa-775d-4d7b-9360-1ce3bf50e572", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb77fd48c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3393f5c9d3b44c0b764fb6ec6e24ac3e89fcd79588d71df349f7259faac4b853", Pod:"calico-kube-controllers-6bb77fd48c-ql2qz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5fe17e05f47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.752 [INFO][5209] k8s.go 608: Cleaning up netns ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.752 [INFO][5209] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" iface="eth0" netns="" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.752 [INFO][5209] k8s.go 615: Releasing IP address(es) ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.752 [INFO][5209] utils.go 188: Calico CNI releasing IP address ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.770 [INFO][5216] ipam_plugin.go 417: Releasing address using handleID ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.770 [INFO][5216] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.770 [INFO][5216] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.778 [WARNING][5216] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.778 [INFO][5216] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" HandleID="k8s-pod-network.0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Workload="localhost-k8s-calico--kube--controllers--6bb77fd48c--ql2qz-eth0" Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.779 [INFO][5216] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.782491 containerd[1437]: 2024-10-08 19:45:09.781 [INFO][5209] k8s.go 621: Teardown processing complete. ContainerID="0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744" Oct 8 19:45:09.783578 containerd[1437]: time="2024-10-08T19:45:09.782961160Z" level=info msg="TearDown network for sandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" successfully" Oct 8 19:45:09.785554 containerd[1437]: time="2024-10-08T19:45:09.785513511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:45:09.785628 containerd[1437]: time="2024-10-08T19:45:09.785579197Z" level=info msg="RemovePodSandbox \"0d8d1d363482dd5cf714d0fbcb0affd638ce6d12eec14319fad6a3e43d828744\" returns successfully" Oct 8 19:45:09.786307 containerd[1437]: time="2024-10-08T19:45:09.786031998Z" level=info msg="StopPodSandbox for \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\"" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.818 [WARNING][5239] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nkd98-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"580b162c-9f56-423c-982e-ca1911345f68", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3", Pod:"csi-node-driver-nkd98", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali92cca13629b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.818 [INFO][5239] k8s.go 608: Cleaning up netns ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.818 [INFO][5239] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" iface="eth0" netns="" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.818 [INFO][5239] k8s.go 615: Releasing IP address(es) ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.818 [INFO][5239] utils.go 188: Calico CNI releasing IP address ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.834 [INFO][5246] ipam_plugin.go 417: Releasing address using handleID ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.834 [INFO][5246] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.835 [INFO][5246] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.842 [WARNING][5246] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.842 [INFO][5246] ipam_plugin.go 445: Releasing address using workloadID ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.844 [INFO][5246] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:45:09.846873 containerd[1437]: 2024-10-08 19:45:09.845 [INFO][5239] k8s.go 621: Teardown processing complete. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.847249 containerd[1437]: time="2024-10-08T19:45:09.846896020Z" level=info msg="TearDown network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" successfully" Oct 8 19:45:09.847249 containerd[1437]: time="2024-10-08T19:45:09.846919742Z" level=info msg="StopPodSandbox for \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" returns successfully" Oct 8 19:45:09.848019 containerd[1437]: time="2024-10-08T19:45:09.847663489Z" level=info msg="RemovePodSandbox for \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\"" Oct 8 19:45:09.848019 containerd[1437]: time="2024-10-08T19:45:09.847713494Z" level=info msg="Forcibly stopping sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\"" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.880 [WARNING][5269] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nkd98-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"580b162c-9f56-423c-982e-ca1911345f68", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"809cdbf663fa8bbdd2874fc0d3c6011161657151427c6bb9f3b8a3da813b84a3", Pod:"csi-node-driver-nkd98", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali92cca13629b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.880 [INFO][5269] k8s.go 608: Cleaning up netns ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.880 [INFO][5269] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" iface="eth0" netns="" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.880 [INFO][5269] k8s.go 615: Releasing IP address(es) ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.880 [INFO][5269] utils.go 188: Calico CNI releasing IP address ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.898 [INFO][5277] ipam_plugin.go 417: Releasing address using handleID ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.898 [INFO][5277] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.898 [INFO][5277] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.906 [WARNING][5277] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.906 [INFO][5277] ipam_plugin.go 445: Releasing address using workloadID ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" HandleID="k8s-pod-network.963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Workload="localhost-k8s-csi--node--driver--nkd98-eth0" Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.907 [INFO][5277] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:45:09.910537 containerd[1437]: 2024-10-08 19:45:09.909 [INFO][5269] k8s.go 621: Teardown processing complete. ContainerID="963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3" Oct 8 19:45:09.912722 containerd[1437]: time="2024-10-08T19:45:09.911487218Z" level=info msg="TearDown network for sandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" successfully" Oct 8 19:45:09.914547 containerd[1437]: time="2024-10-08T19:45:09.914517532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:45:09.914603 containerd[1437]: time="2024-10-08T19:45:09.914577498Z" level=info msg="RemovePodSandbox \"963fbfe03f67f4c30a087f2097b216f615da187efd50095b384472198e07e5f3\" returns successfully" Oct 8 19:45:10.784508 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:54618.service - OpenSSH per-connection server daemon (10.0.0.1:54618). Oct 8 19:45:10.817844 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 54618 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:45:10.819105 sshd[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:45:10.822644 systemd-logind[1424]: New session 19 of user core. Oct 8 19:45:10.835857 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:45:10.953665 sshd[5286]: pam_unix(sshd:session): session closed for user core Oct 8 19:45:10.957071 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:54618.service: Deactivated successfully. Oct 8 19:45:10.959000 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:45:10.959755 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:45:10.960602 systemd-logind[1424]: Removed session 19. 
Oct 8 19:45:15.966563 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:48138.service - OpenSSH per-connection server daemon (10.0.0.1:48138). Oct 8 19:45:16.002725 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 48138 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:45:16.004009 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:45:16.008197 systemd-logind[1424]: New session 20 of user core. Oct 8 19:45:16.015869 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:45:16.145741 sshd[5314]: pam_unix(sshd:session): session closed for user core Oct 8 19:45:16.148499 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:48138.service: Deactivated successfully. Oct 8 19:45:16.155812 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 19:45:16.157721 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. Oct 8 19:45:16.161121 systemd-logind[1424]: Removed session 20. Oct 8 19:45:21.156398 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:48140.service - OpenSSH per-connection server daemon (10.0.0.1:48140). Oct 8 19:45:21.188639 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 48140 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:45:21.189853 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:45:21.193311 systemd-logind[1424]: New session 21 of user core. Oct 8 19:45:21.201827 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 19:45:21.335728 sshd[5331]: pam_unix(sshd:session): session closed for user core Oct 8 19:45:21.340232 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:48140.service: Deactivated successfully. Oct 8 19:45:21.342815 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 19:45:21.343714 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. Oct 8 19:45:21.344528 systemd-logind[1424]: Removed session 21. 
Oct 8 19:45:21.376483 kubelet[2440]: E1008 19:45:21.376386 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:45:22.964480 kubelet[2440]: E1008 19:45:22.964436 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"