Aug 5 21:51:17.919681 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 21:51:17.919704 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024
Aug 5 21:51:17.919714 kernel: KASLR enabled
Aug 5 21:51:17.919720 kernel: efi: EFI v2.7 by EDK II
Aug 5 21:51:17.919726 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 21:51:17.919732 kernel: random: crng init done
Aug 5 21:51:17.919739 kernel: ACPI: Early table checksum verification disabled
Aug 5 21:51:17.919745 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 21:51:17.919751 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 21:51:17.919760 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919766 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919773 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919779 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919785 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919792 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919800 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919807 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919813 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:51:17.919820 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 21:51:17.919826 kernel: NUMA: Failed to initialise from firmware
Aug 5 21:51:17.919833 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:51:17.919840 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 5 21:51:17.919846 kernel: Zone ranges:
Aug 5 21:51:17.919853 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:51:17.919859 kernel: DMA32 empty
Aug 5 21:51:17.919867 kernel: Normal empty
Aug 5 21:51:17.919873 kernel: Movable zone start for each node
Aug 5 21:51:17.919880 kernel: Early memory node ranges
Aug 5 21:51:17.919886 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 21:51:17.919893 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 21:51:17.919899 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 21:51:17.919906 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 21:51:17.919912 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 21:51:17.919919 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 21:51:17.919926 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 21:51:17.919932 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:51:17.919939 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 21:51:17.919947 kernel: psci: probing for conduit method from ACPI.
Aug 5 21:51:17.919953 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 21:51:17.919960 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 21:51:17.919969 kernel: psci: Trusted OS migration not required
Aug 5 21:51:17.919976 kernel: psci: SMC Calling Convention v1.1
Aug 5 21:51:17.919983 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 21:51:17.919992 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 21:51:17.919999 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 21:51:17.920006 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 21:51:17.920013 kernel: Detected PIPT I-cache on CPU0
Aug 5 21:51:17.920020 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 21:51:17.920027 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 21:51:17.920034 kernel: CPU features: detected: Spectre-v4
Aug 5 21:51:17.920041 kernel: CPU features: detected: Spectre-BHB
Aug 5 21:51:17.920048 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 21:51:17.920055 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 21:51:17.920063 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 21:51:17.920070 kernel: alternatives: applying boot alternatives
Aug 5 21:51:17.920078 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:51:17.920085 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 21:51:17.920092 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 21:51:17.920099 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 21:51:17.920106 kernel: Fallback order for Node 0: 0
Aug 5 21:51:17.920113 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 21:51:17.920120 kernel: Policy zone: DMA
Aug 5 21:51:17.920127 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 21:51:17.920145 kernel: software IO TLB: area num 4.
Aug 5 21:51:17.920155 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 21:51:17.920163 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Aug 5 21:51:17.920170 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 21:51:17.920177 kernel: trace event string verifier disabled
Aug 5 21:51:17.920183 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 21:51:17.920191 kernel: rcu: RCU event tracing is enabled.
Aug 5 21:51:17.920198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 21:51:17.920205 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 21:51:17.920213 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 21:51:17.920220 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 21:51:17.920227 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 21:51:17.920234 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 21:51:17.920242 kernel: GICv3: 256 SPIs implemented
Aug 5 21:51:17.920249 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 21:51:17.920256 kernel: Root IRQ handler: gic_handle_irq
Aug 5 21:51:17.920263 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 21:51:17.920272 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 21:51:17.920283 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 21:51:17.920292 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 21:51:17.920299 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 21:51:17.920306 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 21:51:17.920313 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 21:51:17.920320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 21:51:17.920329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:17.920336 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 21:51:17.920343 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 21:51:17.920350 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 21:51:17.920357 kernel: arm-pv: using stolen time PV
Aug 5 21:51:17.920364 kernel: Console: colour dummy device 80x25
Aug 5 21:51:17.920372 kernel: ACPI: Core revision 20230628
Aug 5 21:51:17.920379 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 21:51:17.920387 kernel: pid_max: default: 32768 minimum: 301
Aug 5 21:51:17.920394 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 21:51:17.920402 kernel: SELinux: Initializing.
Aug 5 21:51:17.920410 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:51:17.920417 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:51:17.920424 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:51:17.920432 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:51:17.920439 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 21:51:17.920447 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 21:51:17.920454 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 21:51:17.920461 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 21:51:17.920469 kernel: Remapping and enabling EFI services.
Aug 5 21:51:17.920477 kernel: smp: Bringing up secondary CPUs ...
Aug 5 21:51:17.920484 kernel: Detected PIPT I-cache on CPU1
Aug 5 21:51:17.920491 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 21:51:17.920498 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 21:51:17.920512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:17.920519 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 21:51:17.920526 kernel: Detected PIPT I-cache on CPU2
Aug 5 21:51:17.920534 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 21:51:17.920541 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 21:51:17.920551 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:17.920558 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 21:51:17.920570 kernel: Detected PIPT I-cache on CPU3
Aug 5 21:51:17.920579 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 21:51:17.920587 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 21:51:17.920595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:51:17.920602 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 21:51:17.920609 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 21:51:17.920616 kernel: SMP: Total of 4 processors activated.
Aug 5 21:51:17.920625 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 21:51:17.920633 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 21:51:17.920641 kernel: CPU features: detected: Common not Private translations
Aug 5 21:51:17.920648 kernel: CPU features: detected: CRC32 instructions
Aug 5 21:51:17.920656 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 21:51:17.920663 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 21:51:17.920671 kernel: CPU features: detected: LSE atomic instructions
Aug 5 21:51:17.920678 kernel: CPU features: detected: Privileged Access Never
Aug 5 21:51:17.920687 kernel: CPU features: detected: RAS Extension Support
Aug 5 21:51:17.920694 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 21:51:17.920702 kernel: CPU: All CPU(s) started at EL1
Aug 5 21:51:17.920709 kernel: alternatives: applying system-wide alternatives
Aug 5 21:51:17.920717 kernel: devtmpfs: initialized
Aug 5 21:51:17.920724 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 21:51:17.920732 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 21:51:17.920739 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 21:51:17.920747 kernel: SMBIOS 3.0.0 present.
Aug 5 21:51:17.920756 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 21:51:17.920763 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 21:51:17.920771 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 21:51:17.920778 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 21:51:17.920786 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 21:51:17.920793 kernel: audit: initializing netlink subsys (disabled)
Aug 5 21:51:17.920801 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Aug 5 21:51:17.920808 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 21:51:17.920816 kernel: cpuidle: using governor menu
Aug 5 21:51:17.920825 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 21:51:17.920833 kernel: ASID allocator initialised with 32768 entries
Aug 5 21:51:17.920840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 21:51:17.920847 kernel: Serial: AMBA PL011 UART driver
Aug 5 21:51:17.920855 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 21:51:17.920862 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 21:51:17.920870 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 21:51:17.920877 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 21:51:17.920885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 21:51:17.920894 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 21:51:17.920901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 21:51:17.920908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 21:51:17.920916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 21:51:17.920924 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 21:51:17.920931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 21:51:17.920938 kernel: ACPI: Added _OSI(Module Device)
Aug 5 21:51:17.920946 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 21:51:17.920953 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 21:51:17.920962 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 21:51:17.920970 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 21:51:17.920977 kernel: ACPI: Interpreter enabled
Aug 5 21:51:17.920985 kernel: ACPI: Using GIC for interrupt routing
Aug 5 21:51:17.920993 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 21:51:17.921000 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 21:51:17.921007 kernel: printk: console [ttyAMA0] enabled
Aug 5 21:51:17.921015 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 21:51:17.921240 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 21:51:17.921335 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 21:51:17.921409 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 21:51:17.921476 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 21:51:17.921554 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 21:51:17.921564 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 21:51:17.921572 kernel: PCI host bridge to bus 0000:00
Aug 5 21:51:17.921648 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 21:51:17.921714 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 21:51:17.921777 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 21:51:17.921838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 21:51:17.921926 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 21:51:17.922007 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 21:51:17.922077 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 21:51:17.922162 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 21:51:17.922234 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:51:17.922305 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:51:17.922380 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 21:51:17.922450 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 21:51:17.922518 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 21:51:17.922578 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 21:51:17.922640 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 21:51:17.922650 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 21:51:17.922657 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 21:51:17.922665 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 21:51:17.922672 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 21:51:17.922680 kernel: iommu: Default domain type: Translated
Aug 5 21:51:17.922687 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 21:51:17.922694 kernel: efivars: Registered efivars operations
Aug 5 21:51:17.922702 kernel: vgaarb: loaded
Aug 5 21:51:17.922711 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 21:51:17.922719 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 21:51:17.922726 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 21:51:17.922734 kernel: pnp: PnP ACPI init
Aug 5 21:51:17.922814 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 21:51:17.922825 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 21:51:17.922833 kernel: NET: Registered PF_INET protocol family
Aug 5 21:51:17.922840 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 21:51:17.922850 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 21:51:17.922857 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 21:51:17.922865 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 21:51:17.922873 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 21:51:17.922881 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 21:51:17.922889 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:51:17.922896 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:51:17.922904 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 21:51:17.922911 kernel: PCI: CLS 0 bytes, default 64
Aug 5 21:51:17.922920 kernel: kvm [1]: HYP mode not available
Aug 5 21:51:17.922928 kernel: Initialise system trusted keyrings
Aug 5 21:51:17.922935 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 21:51:17.922943 kernel: Key type asymmetric registered
Aug 5 21:51:17.922950 kernel: Asymmetric key parser 'x509' registered
Aug 5 21:51:17.922958 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 21:51:17.922965 kernel: io scheduler mq-deadline registered
Aug 5 21:51:17.922973 kernel: io scheduler kyber registered
Aug 5 21:51:17.922981 kernel: io scheduler bfq registered
Aug 5 21:51:17.922990 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 21:51:17.922998 kernel: ACPI: button: Power Button [PWRB]
Aug 5 21:51:17.923006 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 21:51:17.923083 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 21:51:17.923096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 21:51:17.923106 kernel: thunder_xcv, ver 1.0
Aug 5 21:51:17.923113 kernel: thunder_bgx, ver 1.0
Aug 5 21:51:17.923121 kernel: nicpf, ver 1.0
Aug 5 21:51:17.923128 kernel: nicvf, ver 1.0
Aug 5 21:51:17.923224 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 21:51:17.923295 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:51:17 UTC (1722894677)
Aug 5 21:51:17.923305 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 21:51:17.923313 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 21:51:17.923321 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 21:51:17.923330 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 21:51:17.923337 kernel: NET: Registered PF_INET6 protocol family
Aug 5 21:51:17.923345 kernel: Segment Routing with IPv6
Aug 5 21:51:17.923356 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 21:51:17.923363 kernel: NET: Registered PF_PACKET protocol family
Aug 5 21:51:17.923372 kernel: Key type dns_resolver registered
Aug 5 21:51:17.923379 kernel: registered taskstats version 1
Aug 5 21:51:17.923387 kernel: Loading compiled-in X.509 certificates
Aug 5 21:51:17.923394 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09'
Aug 5 21:51:17.923402 kernel: Key type .fscrypt registered
Aug 5 21:51:17.923409 kernel: Key type fscrypt-provisioning registered
Aug 5 21:51:17.923417 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 21:51:17.923426 kernel: ima: Allocated hash algorithm: sha1
Aug 5 21:51:17.923434 kernel: ima: No architecture policies found
Aug 5 21:51:17.923441 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 21:51:17.923449 kernel: clk: Disabling unused clocks
Aug 5 21:51:17.923456 kernel: Freeing unused kernel memory: 39040K
Aug 5 21:51:17.923463 kernel: Run /init as init process
Aug 5 21:51:17.923471 kernel: with arguments:
Aug 5 21:51:17.923478 kernel: /init
Aug 5 21:51:17.923486 kernel: with environment:
Aug 5 21:51:17.923495 kernel: HOME=/
Aug 5 21:51:17.923507 kernel: TERM=linux
Aug 5 21:51:17.923515 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 21:51:17.923524 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:51:17.923534 systemd[1]: Detected virtualization kvm.
Aug 5 21:51:17.923542 systemd[1]: Detected architecture arm64.
Aug 5 21:51:17.923550 systemd[1]: Running in initrd.
Aug 5 21:51:17.923558 systemd[1]: No hostname configured, using default hostname.
Aug 5 21:51:17.923568 systemd[1]: Hostname set to .
Aug 5 21:51:17.923576 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:51:17.923584 systemd[1]: Queued start job for default target initrd.target.
Aug 5 21:51:17.923592 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:51:17.923601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:51:17.923609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 21:51:17.923617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:51:17.923626 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 21:51:17.923636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 21:51:17.923645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 21:51:17.923653 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 21:51:17.923661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:51:17.923670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:51:17.923677 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:51:17.923687 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:51:17.923695 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:51:17.923703 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:51:17.923711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:51:17.923719 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:51:17.923728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:51:17.923736 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:51:17.923744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:51:17.923752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:51:17.923762 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:51:17.923770 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:51:17.923778 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 21:51:17.923786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:51:17.923794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 21:51:17.923802 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 21:51:17.923810 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:51:17.923818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:51:17.923827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:17.923836 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 21:51:17.923844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:51:17.923852 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 21:51:17.923878 systemd-journald[237]: Collecting audit messages is disabled.
Aug 5 21:51:17.923900 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:51:17.923909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:17.923917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:51:17.923926 systemd-journald[237]: Journal started
Aug 5 21:51:17.923945 systemd-journald[237]: Runtime Journal (/run/log/journal/c8ba757300234dffaf1e7c27099275be) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:51:17.912381 systemd-modules-load[238]: Inserted module 'overlay'
Aug 5 21:51:17.928170 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:51:17.928200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 21:51:17.930974 systemd-modules-load[238]: Inserted module 'br_netfilter'
Aug 5 21:51:17.931907 kernel: Bridge firewalling registered
Aug 5 21:51:17.932010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:51:17.944293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:51:17.946131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:51:17.947915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:51:17.953101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:51:17.958248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:51:17.961757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:51:17.965169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:51:17.967668 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:17.979345 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 21:51:17.981669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:51:17.992636 dracut-cmdline[277]: dracut-dracut-053
Aug 5 21:51:17.995451 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:51:18.013296 systemd-resolved[278]: Positive Trust Anchors:
Aug 5 21:51:18.013317 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:51:18.013347 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:51:18.018332 systemd-resolved[278]: Defaulting to hostname 'linux'.
Aug 5 21:51:18.021120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:51:18.022247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:51:18.070172 kernel: SCSI subsystem initialized
Aug 5 21:51:18.075152 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 21:51:18.083166 kernel: iscsi: registered transport (tcp)
Aug 5 21:51:18.099179 kernel: iscsi: registered transport (qla4xxx)
Aug 5 21:51:18.099234 kernel: QLogic iSCSI HBA Driver
Aug 5 21:51:18.143519 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:51:18.155292 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 21:51:18.175923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 21:51:18.175987 kernel: device-mapper: uevent: version 1.0.3
Aug 5 21:51:18.176010 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 21:51:18.222175 kernel: raid6: neonx8 gen() 15227 MB/s
Aug 5 21:51:18.239158 kernel: raid6: neonx4 gen() 15657 MB/s
Aug 5 21:51:18.256151 kernel: raid6: neonx2 gen() 13236 MB/s
Aug 5 21:51:18.273156 kernel: raid6: neonx1 gen() 10451 MB/s
Aug 5 21:51:18.290157 kernel: raid6: int64x8 gen() 6953 MB/s
Aug 5 21:51:18.307162 kernel: raid6: int64x4 gen() 7346 MB/s
Aug 5 21:51:18.324157 kernel: raid6: int64x2 gen() 6024 MB/s
Aug 5 21:51:18.341166 kernel: raid6: int64x1 gen() 5050 MB/s
Aug 5 21:51:18.341193 kernel: raid6: using algorithm neonx4 gen() 15657 MB/s
Aug 5 21:51:18.358194 kernel: raid6: .... xor() 12082 MB/s, rmw enabled
Aug 5 21:51:18.358226 kernel: raid6: using neon recovery algorithm
Aug 5 21:51:18.363161 kernel: xor: measuring software checksum speed
Aug 5 21:51:18.364156 kernel: 8regs : 19864 MB/sec
Aug 5 21:51:18.365157 kernel: 32regs : 19720 MB/sec
Aug 5 21:51:18.366218 kernel: arm64_neon : 27197 MB/sec
Aug 5 21:51:18.366241 kernel: xor: using function: arm64_neon (27197 MB/sec)
Aug 5 21:51:18.418159 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 21:51:18.428802 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:51:18.439313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:51:18.452702 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Aug 5 21:51:18.455940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:51:18.465312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 21:51:18.476688 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Aug 5 21:51:18.504190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:51:18.518309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:51:18.559192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:51:18.567515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 21:51:18.583188 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:51:18.584743 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:51:18.586361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:51:18.588585 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:51:18.597356 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 21:51:18.607682 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:51:18.618411 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 21:51:18.632757 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 21:51:18.632869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 21:51:18.632881 kernel: GPT:9289727 != 19775487
Aug 5 21:51:18.632890 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 21:51:18.632900 kernel: GPT:9289727 != 19775487
Aug 5 21:51:18.632909 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 21:51:18.632918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:51:18.619930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:51:18.620043 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:18.624000 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:51:18.630958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:51:18.631097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:18.633313 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:18.641351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:18.649166 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (510)
Aug 5 21:51:18.652154 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (511)
Aug 5 21:51:18.652711 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 21:51:18.657333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:18.664838 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 21:51:18.669241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:51:18.672933 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 21:51:18.674068 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 21:51:18.686293 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 21:51:18.691305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:51:18.693538 disk-uuid[552]: Primary Header is updated.
Aug 5 21:51:18.693538 disk-uuid[552]: Secondary Entries is updated.
Aug 5 21:51:18.693538 disk-uuid[552]: Secondary Header is updated.
Aug 5 21:51:18.697166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:51:18.710864 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:19.707171 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:51:19.707823 disk-uuid[553]: The operation has completed successfully.
Aug 5 21:51:19.729883 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 21:51:19.729979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 21:51:19.751318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 21:51:19.755435 sh[574]: Success
Aug 5 21:51:19.771172 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 21:51:19.798728 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 21:51:19.811487 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 21:51:19.813338 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 21:51:19.822932 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99
Aug 5 21:51:19.822994 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:19.823016 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 21:51:19.824679 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 21:51:19.824699 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 21:51:19.829053 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 21:51:19.830375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 21:51:19.837341 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 21:51:19.839504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 21:51:19.845461 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:19.845507 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:19.845520 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:51:19.848168 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:51:19.857534 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 21:51:19.858917 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:19.864304 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 21:51:19.869299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 21:51:19.935437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:51:19.948300 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:51:19.966873 ignition[663]: Ignition 2.19.0
Aug 5 21:51:19.966885 ignition[663]: Stage: fetch-offline
Aug 5 21:51:19.966924 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:19.966933 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:19.967018 ignition[663]: parsed url from cmdline: ""
Aug 5 21:51:19.967021 ignition[663]: no config URL provided
Aug 5 21:51:19.967025 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 21:51:19.967037 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Aug 5 21:51:19.967063 ignition[663]: op(1): [started] loading QEMU firmware config module
Aug 5 21:51:19.967068 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 21:51:19.974800 systemd-networkd[766]: lo: Link UP
Aug 5 21:51:19.975324 ignition[663]: op(1): [finished] loading QEMU firmware config module
Aug 5 21:51:19.974804 systemd-networkd[766]: lo: Gained carrier
Aug 5 21:51:19.975506 systemd-networkd[766]: Enumeration completed
Aug 5 21:51:19.975772 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:51:19.975925 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:19.975929 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:51:19.976782 systemd-networkd[766]: eth0: Link UP
Aug 5 21:51:19.976786 systemd-networkd[766]: eth0: Gained carrier
Aug 5 21:51:19.976793 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:19.980262 systemd[1]: Reached target network.target - Network.
Aug 5 21:51:19.993179 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:51:20.022617 ignition[663]: parsing config with SHA512: bebb68ba6d6fab19b34bfab2d4e34de86364e2bb0d525f747e7debd249fdbe044bcc8db8eb5df12e3ae8642ecd500772ee62973348c8566275cff9a751c7d1b1
Aug 5 21:51:20.026716 unknown[663]: fetched base config from "system"
Aug 5 21:51:20.026730 unknown[663]: fetched user config from "qemu"
Aug 5 21:51:20.027124 ignition[663]: fetch-offline: fetch-offline passed
Aug 5 21:51:20.029377 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:51:20.027196 ignition[663]: Ignition finished successfully
Aug 5 21:51:20.031666 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 21:51:20.039287 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 21:51:20.050359 ignition[772]: Ignition 2.19.0
Aug 5 21:51:20.050368 ignition[772]: Stage: kargs
Aug 5 21:51:20.050645 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:20.050656 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:20.051929 ignition[772]: kargs: kargs passed
Aug 5 21:51:20.053957 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 21:51:20.051979 ignition[772]: Ignition finished successfully
Aug 5 21:51:20.061317 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 21:51:20.070835 ignition[780]: Ignition 2.19.0
Aug 5 21:51:20.070845 ignition[780]: Stage: disks
Aug 5 21:51:20.070991 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:20.073453 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 21:51:20.071000 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:20.074882 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 21:51:20.071824 ignition[780]: disks: disks passed
Aug 5 21:51:20.076418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 21:51:20.071870 ignition[780]: Ignition finished successfully
Aug 5 21:51:20.078246 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:51:20.079971 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:51:20.081355 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:51:20.093276 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 21:51:20.104601 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 21:51:20.108218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 21:51:20.110761 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 21:51:20.154161 kernel: EXT4-fs (vda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none.
Aug 5 21:51:20.154275 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 21:51:20.155522 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:51:20.171222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:51:20.172870 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 21:51:20.174113 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 21:51:20.174224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 21:51:20.174285 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:51:20.182125 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Aug 5 21:51:20.180422 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 21:51:20.182318 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 21:51:20.187282 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:20.187303 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:20.187313 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:51:20.188210 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:51:20.189965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:51:20.228335 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 21:51:20.232698 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Aug 5 21:51:20.236819 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 21:51:20.240837 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 21:51:20.309693 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 21:51:20.323227 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 21:51:20.324784 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 21:51:20.331160 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:20.347150 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 21:51:20.349000 ignition[912]: INFO : Ignition 2.19.0
Aug 5 21:51:20.349000 ignition[912]: INFO : Stage: mount
Aug 5 21:51:20.349000 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:20.349000 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:20.353800 ignition[912]: INFO : mount: mount passed
Aug 5 21:51:20.353800 ignition[912]: INFO : Ignition finished successfully
Aug 5 21:51:20.351310 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 21:51:20.358277 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 21:51:20.822011 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 21:51:20.836307 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:51:20.842605 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Aug 5 21:51:20.842634 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:51:20.842645 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:51:20.844146 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:51:20.846174 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:51:20.847169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:51:20.863139 ignition[943]: INFO : Ignition 2.19.0
Aug 5 21:51:20.864011 ignition[943]: INFO : Stage: files
Aug 5 21:51:20.864011 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:20.864011 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:20.867033 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 21:51:20.867033 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 21:51:20.867033 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 21:51:20.870591 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 21:51:20.870591 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 21:51:20.870591 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 21:51:20.870591 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:51:20.870591 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 5 21:51:20.867924 unknown[943]: wrote ssh authorized keys file for user: core
Aug 5 21:51:20.906560 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 21:51:20.956519 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:51:20.958421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Aug 5 21:51:21.260300 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 21:51:21.567233 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 21:51:21.567233 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 5 21:51:21.570375 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:51:21.588545 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:51:21.592748 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:51:21.594128 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:51:21.594128 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 21:51:21.594128 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 21:51:21.594128 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:51:21.594128 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:51:21.594128 ignition[943]: INFO : files: files passed
Aug 5 21:51:21.594128 ignition[943]: INFO : Ignition finished successfully
Aug 5 21:51:21.594910 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 21:51:21.608338 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 21:51:21.610167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 21:51:21.614984 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 21:51:21.615112 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 21:51:21.618077 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 21:51:21.619961 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:51:21.619961 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:51:21.622735 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:51:21.621784 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:51:21.624230 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 21:51:21.645334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 21:51:21.664713 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 21:51:21.664821 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 21:51:21.666849 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 21:51:21.668537 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 21:51:21.670094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 21:51:21.670858 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 21:51:21.685593 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:51:21.687873 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 21:51:21.698954 systemd[1]: Stopped target network.target - Network.
Aug 5 21:51:21.699928 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:51:21.701591 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:51:21.703544 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 21:51:21.705132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 21:51:21.705274 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:51:21.707652 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 21:51:21.709504 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 21:51:21.711030 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 21:51:21.712662 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:51:21.714445 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 21:51:21.716238 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 21:51:21.717970 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:51:21.719775 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 21:51:21.721581 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 21:51:21.723164 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 21:51:21.724561 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 21:51:21.724686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:51:21.726863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:51:21.728640 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:51:21.730363 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 21:51:21.731202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:51:21.732348 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 21:51:21.732461 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:51:21.735034 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 21:51:21.735160 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:51:21.736987 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 21:51:21.738474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 21:51:21.745198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:51:21.746367 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 21:51:21.748280 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 21:51:21.749645 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 21:51:21.749734 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:51:21.751184 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 21:51:21.751270 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:51:21.752724 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 21:51:21.752831 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:51:21.754419 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 21:51:21.754527 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 21:51:21.770371 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 21:51:21.771903 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 21:51:21.772917 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 21:51:21.774582 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 21:51:21.778167 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 21:51:21.778298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:51:21.780126 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 21:51:21.780236 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:51:21.784209 systemd-networkd[766]: eth0: DHCPv6 lease lost
Aug 5 21:51:21.786175 ignition[999]: INFO : Ignition 2.19.0
Aug 5 21:51:21.786175 ignition[999]: INFO : Stage: umount
Aug 5 21:51:21.786175 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:51:21.786175 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:51:21.784919 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 21:51:21.799968 ignition[999]: INFO : umount: umount passed
Aug 5 21:51:21.799968 ignition[999]: INFO : Ignition finished successfully
Aug 5 21:51:21.785678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 21:51:21.785769 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 21:51:21.788648 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 21:51:21.788729 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 21:51:21.790649 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 21:51:21.790727 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 21:51:21.795961 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 21:51:21.796044 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 21:51:21.799184 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 21:51:21.799219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:51:21.801030 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 21:51:21.801079 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 21:51:21.802052 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 21:51:21.802093 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 21:51:21.803571 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 21:51:21.803611 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 21:51:21.805110 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 21:51:21.805178 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 21:51:21.815272 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 21:51:21.816167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 21:51:21.816231 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:51:21.818051 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 21:51:21.818097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:51:21.819696 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 21:51:21.819736 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:51:21.821309 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 21:51:21.821353 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:51:21.823236 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:51:21.831655 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 21:51:21.831753 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 21:51:21.837725 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 21:51:21.837869 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:51:21.839429 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 21:51:21.839471 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:51:21.841081 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 21:51:21.841114 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:51:21.842998 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 21:51:21.843050 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:51:21.845494 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 21:51:21.845542 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:51:21.848241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:51:21.848289 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:51:21.865333 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 21:51:21.866361 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 21:51:21.866421 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:51:21.868342 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 21:51:21.868385 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:51:21.870142 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 21:51:21.870186 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:51:21.872148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:51:21.872192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:21.874299 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 21:51:21.874378 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 21:51:21.876106 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 21:51:21.876190 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 21:51:21.881938 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 21:51:21.882979 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 21:51:21.883047 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 21:51:21.896403 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 21:51:21.902149 systemd[1]: Switching root.
Aug 5 21:51:21.932821 systemd-journald[237]: Journal stopped
Aug 5 21:51:22.619914 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Aug 5 21:51:22.619974 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 21:51:22.619986 kernel: SELinux: policy capability open_perms=1
Aug 5 21:51:22.619996 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 21:51:22.620009 kernel: SELinux: policy capability always_check_network=0
Aug 5 21:51:22.620019 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 21:51:22.620029 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 21:51:22.620038 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 21:51:22.620048 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 21:51:22.620058 kernel: audit: type=1403 audit(1722894682.079:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 21:51:22.620068 systemd[1]: Successfully loaded SELinux policy in 31.400ms.
Aug 5 21:51:22.620086 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.159ms.
Aug 5 21:51:22.620099 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:51:22.620111 systemd[1]: Detected virtualization kvm.
Aug 5 21:51:22.620122 systemd[1]: Detected architecture arm64.
Aug 5 21:51:22.620144 systemd[1]: Detected first boot.
Aug 5 21:51:22.620157 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:51:22.620169 zram_generator::config[1044]: No configuration found.
Aug 5 21:51:22.620180 systemd[1]: Populated /etc with preset unit settings.
Aug 5 21:51:22.620191 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 21:51:22.620202 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 21:51:22.620219 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 21:51:22.620231 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 21:51:22.620242 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 21:51:22.620252 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 21:51:22.620264 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 21:51:22.620275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 21:51:22.620286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 21:51:22.620297 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 21:51:22.620309 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 21:51:22.620320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:51:22.620331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:51:22.620341 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 21:51:22.620352 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 21:51:22.620363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 21:51:22.620373 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:51:22.620384 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 5 21:51:22.620395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:51:22.620407 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 21:51:22.620417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 21:51:22.620428 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:51:22.620439 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 21:51:22.620450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:51:22.620461 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:51:22.620472 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:51:22.620482 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:51:22.620504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 21:51:22.620515 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 21:51:22.620525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:51:22.620536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:51:22.620548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:51:22.620559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 21:51:22.620569 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 21:51:22.620579 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 21:51:22.620591 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 21:51:22.620603 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 21:51:22.620614 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 21:51:22.620624 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 21:51:22.620636 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 21:51:22.620646 systemd[1]: Reached target machines.target - Containers.
Aug 5 21:51:22.620657 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 21:51:22.620667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:51:22.620678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:51:22.620691 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 21:51:22.620702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:51:22.620712 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:51:22.620727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:51:22.620737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 21:51:22.620748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:51:22.620759 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 21:51:22.620769 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 21:51:22.620780 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 21:51:22.620792 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 21:51:22.620803 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 21:51:22.620813 kernel: fuse: init (API version 7.39)
Aug 5 21:51:22.620822 kernel: loop: module loaded
Aug 5 21:51:22.620832 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:51:22.620842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:51:22.620853 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 21:51:22.620863 kernel: ACPI: bus type drm_connector registered
Aug 5 21:51:22.620873 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 21:51:22.620885 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:51:22.620895 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 21:51:22.620906 systemd[1]: Stopped verity-setup.service.
Aug 5 21:51:22.620935 systemd-journald[1103]: Collecting audit messages is disabled.
Aug 5 21:51:22.620956 systemd-journald[1103]: Journal started
Aug 5 21:51:22.620978 systemd-journald[1103]: Runtime Journal (/run/log/journal/c8ba757300234dffaf1e7c27099275be) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:51:22.430427 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 21:51:22.448275 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 21:51:22.448659 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 21:51:22.625374 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:51:22.626041 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 21:51:22.627233 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 21:51:22.628557 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 21:51:22.629628 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 21:51:22.630863 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 21:51:22.632093 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 21:51:22.635153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:51:22.636693 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 21:51:22.636845 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 21:51:22.638343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:51:22.638478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:51:22.639863 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:51:22.640005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:51:22.643478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:51:22.643773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:51:22.646372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 21:51:22.647738 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 21:51:22.647879 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 21:51:22.649128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:51:22.649285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:51:22.650655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:51:22.652005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 21:51:22.653661 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 21:51:22.666257 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 21:51:22.676280 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 21:51:22.678558 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 21:51:22.679645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 21:51:22.679686 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:51:22.681592 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 21:51:22.683886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 21:51:22.686057 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 21:51:22.687191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:51:22.688777 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 21:51:22.690900 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 21:51:22.692155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:51:22.696305 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 21:51:22.698555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:51:22.700304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:51:22.702013 systemd-journald[1103]: Time spent on flushing to /var/log/journal/c8ba757300234dffaf1e7c27099275be is 15.042ms for 852 entries.
Aug 5 21:51:22.702013 systemd-journald[1103]: System Journal (/var/log/journal/c8ba757300234dffaf1e7c27099275be) is 8.0M, max 195.6M, 187.6M free.
Aug 5 21:51:22.726551 systemd-journald[1103]: Received client request to flush runtime journal.
Aug 5 21:51:22.703625 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 21:51:22.706356 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:51:22.709045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:51:22.710589 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 21:51:22.712349 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 21:51:22.715263 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 21:51:22.725965 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 21:51:22.730602 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 21:51:22.736831 kernel: loop0: detected capacity change from 0 to 194512
Aug 5 21:51:22.736873 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 21:51:22.737939 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 21:51:22.749358 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 21:51:22.750349 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 21:51:22.753246 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 21:51:22.755159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:51:22.764118 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Aug 5 21:51:22.764144 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Aug 5 21:51:22.769247 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:51:22.781387 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 21:51:22.783110 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 21:51:22.783916 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 21:51:22.786647 kernel: loop1: detected capacity change from 0 to 113712
Aug 5 21:51:22.789418 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 5 21:51:22.808703 kernel: loop2: detected capacity change from 0 to 59688
Aug 5 21:51:22.808018 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 21:51:22.822335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:51:22.833207 kernel: loop3: detected capacity change from 0 to 194512
Aug 5 21:51:22.834055 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 5 21:51:22.834066 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 5 21:51:22.838638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:51:22.848157 kernel: loop4: detected capacity change from 0 to 113712
Aug 5 21:51:22.857165 kernel: loop5: detected capacity change from 0 to 59688
Aug 5 21:51:22.867309 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 21:51:22.867749 (sd-merge)[1182]: Merged extensions into '/usr'.
Aug 5 21:51:22.872619 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 21:51:22.872637 systemd[1]: Reloading...
Aug 5 21:51:22.929218 zram_generator::config[1216]: No configuration found.
Aug 5 21:51:22.995238 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 21:51:23.005971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:51:23.043512 systemd[1]: Reloading finished in 169 ms.
Aug 5 21:51:23.071174 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 21:51:23.072574 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 21:51:23.090335 systemd[1]: Starting ensure-sysext.service...
Aug 5 21:51:23.092246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:51:23.104991 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Aug 5 21:51:23.105006 systemd[1]: Reloading...
Aug 5 21:51:23.125200 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 21:51:23.125453 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 21:51:23.126218 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 21:51:23.126433 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Aug 5 21:51:23.126492 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Aug 5 21:51:23.128620 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:51:23.128633 systemd-tmpfiles[1242]: Skipping /boot
Aug 5 21:51:23.134780 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:51:23.134795 systemd-tmpfiles[1242]: Skipping /boot
Aug 5 21:51:23.153167 zram_generator::config[1267]: No configuration found.
Aug 5 21:51:23.247895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:51:23.285558 systemd[1]: Reloading finished in 180 ms.
Aug 5 21:51:23.299108 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 21:51:23.307609 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:51:23.315268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:51:23.317537 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 21:51:23.320202 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 21:51:23.326482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:51:23.331711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:51:23.334792 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 21:51:23.341491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:51:23.343878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:51:23.345995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:51:23.353455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:51:23.354528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:51:23.363439 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 21:51:23.365278 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 21:51:23.367386 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Aug 5 21:51:23.368830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:51:23.368993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:51:23.370701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:51:23.370840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:51:23.372659 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:51:23.372815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:51:23.379790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:51:23.380050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:51:23.390983 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 21:51:23.395803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:51:23.399785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 21:51:23.404509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:51:23.420563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:51:23.432457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:51:23.435466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:51:23.436470 augenrules[1357]: No rules
Aug 5 21:51:23.436563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:51:23.438444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:51:23.439637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 21:51:23.440464 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 21:51:23.442519 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:51:23.445726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 21:51:23.448199 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1352)
Aug 5 21:51:23.448543 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 21:51:23.450059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:51:23.452887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:51:23.459515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:51:23.459652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:51:23.466909 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:51:23.468057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:51:23.469570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1338)
Aug 5 21:51:23.488697 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 5 21:51:23.492654 systemd[1]: Finished ensure-sysext.service.
Aug 5 21:51:23.504650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:51:23.510937 systemd-resolved[1308]: Positive Trust Anchors:
Aug 5 21:51:23.511111 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:51:23.511155 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:51:23.517110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:51:23.522328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:51:23.531423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:51:23.535372 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Aug 5 21:51:23.537354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:51:23.538505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:51:23.543671 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 21:51:23.544916 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 21:51:23.545284 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:51:23.546705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:51:23.546886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:51:23.548317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:51:23.548468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:51:23.549794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:51:23.549920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:51:23.552323 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:51:23.552451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:51:23.559946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:51:23.562168 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:51:23.564699 systemd-networkd[1366]: lo: Link UP
Aug 5 21:51:23.564712 systemd-networkd[1366]: lo: Gained carrier
Aug 5 21:51:23.565605 systemd-networkd[1366]: Enumeration completed
Aug 5 21:51:23.567474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 21:51:23.569279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:51:23.569343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:51:23.569530 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:51:23.570709 systemd[1]: Reached target network.target - Network.
Aug 5 21:51:23.576440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 21:51:23.584040 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:23.584049 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:51:23.590975 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:23.591007 systemd-networkd[1366]: eth0: Link UP
Aug 5 21:51:23.591010 systemd-networkd[1366]: eth0: Gained carrier
Aug 5 21:51:23.591018 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:51:23.599098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 21:51:23.602216 systemd-networkd[1366]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:51:23.629541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:51:23.629912 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 21:51:23.629971 systemd-timesyncd[1386]: Initial clock synchronization to Mon 2024-08-05 21:51:23.768847 UTC.
Aug 5 21:51:23.630850 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 21:51:23.632667 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 21:51:23.643669 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 21:51:23.646271 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 21:51:23.667245 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:51:23.677939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:51:23.700195 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 21:51:23.702153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:51:23.703226 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:51:23.704339 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 21:51:23.705507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 21:51:23.707059 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 21:51:23.708283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 21:51:23.709514 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 21:51:23.710788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 21:51:23.710825 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:51:23.711704 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:51:23.713160 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 21:51:23.715536 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 21:51:23.731289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 21:51:23.733610 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 21:51:23.735209 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 21:51:23.736290 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:51:23.737229 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:51:23.738121 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:51:23.738222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:51:23.739118 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 21:51:23.741063 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 21:51:23.742008 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:51:23.743393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 21:51:23.748095 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 21:51:23.749431 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 21:51:23.750510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 21:51:23.753384 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 21:51:23.760598 jq[1413]: false
Aug 5 21:51:23.761289 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 21:51:23.763567 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 21:51:23.767085 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 21:51:23.768491 extend-filesystems[1414]: Found loop3
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found loop4
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found loop5
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda1
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda2
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda3
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found usr
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda4
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda6
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda7
Aug 5 21:51:23.769377 extend-filesystems[1414]: Found vda9
Aug 5 21:51:23.769377 extend-filesystems[1414]: Checking size of /dev/vda9
Aug 5 21:51:23.785590 dbus-daemon[1412]: [system] SELinux support is enabled
Aug 5 21:51:23.770749 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 21:51:23.771197 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 21:51:23.772016 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 21:51:23.773864 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 21:51:23.795555 jq[1425]: true
Aug 5 21:51:23.778007 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 21:51:23.780216 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 21:51:23.795819 jq[1434]: true
Aug 5 21:51:23.780371 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 21:51:23.787659 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 21:51:23.800319 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 21:51:23.800362 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 21:51:23.800984 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 21:51:23.804296 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 21:51:23.804351 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 21:51:23.809005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 21:51:23.809383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 21:51:23.820307 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 21:51:23.822027 update_engine[1423]: I0805 21:51:23.820979 1423 main.cc:92] Flatcar Update Engine starting
Aug 5 21:51:23.823858 extend-filesystems[1414]: Resized partition /dev/vda9
Aug 5 21:51:23.827014 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 21:51:23.820508 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 21:51:23.827119 update_engine[1423]: I0805 21:51:23.824438 1423 update_check_scheduler.cc:74] Next update check in 4m28s
Aug 5 21:51:23.827167 extend-filesystems[1452]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 21:51:23.828291 tar[1430]: linux-arm64/helm
Aug 5 21:51:23.837805 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1350)
Aug 5 21:51:23.824419 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 21:51:23.835441 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 21:51:23.855166 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 21:51:23.884947 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 21:51:23.884947 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 21:51:23.884947 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 21:51:23.893097 extend-filesystems[1414]: Resized filesystem in /dev/vda9
Aug 5 21:51:23.884975 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 5 21:51:23.885641 systemd-logind[1420]: New seat seat0.
Aug 5 21:51:23.888454 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 21:51:23.888642 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 21:51:23.893262 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 21:51:23.896598 bash[1465]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 21:51:23.899227 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 21:51:23.902816 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 21:51:23.928743 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 21:51:23.968467 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 21:51:23.988358 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 21:51:23.999427 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 21:51:24.004393 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 21:51:24.006182 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 21:51:24.009955 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 21:51:24.025350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 21:51:24.029462 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 21:51:24.031389 containerd[1439]: time="2024-08-05T21:51:24.031306642Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 21:51:24.035469 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 5 21:51:24.037895 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 21:51:24.059328 containerd[1439]: time="2024-08-05T21:51:24.059279540Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 21:51:24.059328 containerd[1439]: time="2024-08-05T21:51:24.059336798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061119 containerd[1439]: time="2024-08-05T21:51:24.061021294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061119 containerd[1439]: time="2024-08-05T21:51:24.061069193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061323 containerd[1439]: time="2024-08-05T21:51:24.061295865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061323 containerd[1439]: time="2024-08-05T21:51:24.061320811Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 21:51:24.061408 containerd[1439]: time="2024-08-05T21:51:24.061394022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061458 containerd[1439]: time="2024-08-05T21:51:24.061444728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061483 containerd[1439]: time="2024-08-05T21:51:24.061460517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061544 containerd[1439]: time="2024-08-05T21:51:24.061521357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061738 containerd[1439]: time="2024-08-05T21:51:24.061719461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061759 containerd[1439]: time="2024-08-05T21:51:24.061744122Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 21:51:24.061759 containerd[1439]: time="2024-08-05T21:51:24.061755395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061879 containerd[1439]: time="2024-08-05T21:51:24.061847854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:51:24.061901 containerd[1439]: time="2024-08-05T21:51:24.061880207Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 21:51:24.061948 containerd[1439]: time="2024-08-05T21:51:24.061934250Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 21:51:24.061969 containerd[1439]: time="2024-08-05T21:51:24.061949226Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 21:51:24.065420 containerd[1439]: time="2024-08-05T21:51:24.065380360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 21:51:24.065420 containerd[1439]: time="2024-08-05T21:51:24.065420322Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 21:51:24.065521 containerd[1439]: time="2024-08-05T21:51:24.065435013Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 21:51:24.065521 containerd[1439]: time="2024-08-05T21:51:24.065469523Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 21:51:24.065521 containerd[1439]: time="2024-08-05T21:51:24.065485109Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 21:51:24.065521 containerd[1439]: time="2024-08-05T21:51:24.065495324Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 21:51:24.065521 containerd[1439]: time="2024-08-05T21:51:24.065507207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 21:51:24.065681 containerd[1439]: time="2024-08-05T21:51:24.065645611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 21:51:24.065681 containerd[1439]: time="2024-08-05T21:51:24.065669784Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 21:51:24.065681 containerd[1439]: time="2024-08-05T21:51:24.065682684Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 21:51:24.065750 containerd[1439]: time="2024-08-05T21:51:24.065698271Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 21:51:24.065750 containerd[1439]: time="2024-08-05T21:51:24.065712839Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065750 containerd[1439]: time="2024-08-05T21:51:24.065729443Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065808 containerd[1439]: time="2024-08-05T21:51:24.065760209Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065808 containerd[1439]: time="2024-08-05T21:51:24.065775062Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065808 containerd[1439]: time="2024-08-05T21:51:24.065789753Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065808 containerd[1439]: time="2024-08-05T21:51:24.065802735Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065876 containerd[1439]: time="2024-08-05T21:51:24.065814944Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.065876 containerd[1439]: time="2024-08-05T21:51:24.065826786Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 21:51:24.065968 containerd[1439]: time="2024-08-05T21:51:24.065923925Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 21:51:24.066221 containerd[1439]: time="2024-08-05T21:51:24.066192799Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 21:51:24.066482 containerd[1439]: time="2024-08-05T21:51:24.066314599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.066482 containerd[1439]: time="2024-08-05T21:51:24.066338284Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 21:51:24.066482 containerd[1439]: time="2024-08-05T21:51:24.066362620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 21:51:24.066590 containerd[1439]: time="2024-08-05T21:51:24.066574723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.066741 containerd[1439]: time="2024-08-05T21:51:24.066724888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.066806 containerd[1439]: time="2024-08-05T21:51:24.066792198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.066857677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.066877047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.066890477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.066902848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.066915586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.066931457Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067080320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067100220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067113364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067125817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067138351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067179657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067192313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068050 containerd[1439]: time="2024-08-05T21:51:24.067203748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 21:51:24.068362 containerd[1439]: time="2024-08-05T21:51:24.067466151Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 21:51:24.068362 containerd[1439]: time="2024-08-05T21:51:24.067525810Z" level=info msg="Connect containerd service"
Aug 5 21:51:24.068362 containerd[1439]: time="2024-08-05T21:51:24.067555029Z" level=info msg="using legacy CRI server"
Aug 5 21:51:24.068362 containerd[1439]: time="2024-08-05T21:51:24.067562314Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 21:51:24.068362 containerd[1439]: time="2024-08-05T21:51:24.067719112Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 21:51:24.068936 containerd[1439]: time="2024-08-05T21:51:24.068903831Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 21:51:24.069049 containerd[1439]: time="2024-08-05T21:51:24.069033404Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 21:51:24.069144 containerd[1439]: time="2024-08-05T21:51:24.069116667Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 21:51:24.069353 containerd[1439]: time="2024-08-05T21:51:24.069337194Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 21:51:24.069420 containerd[1439]: time="2024-08-05T21:51:24.069404870Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 21:51:24.069958 containerd[1439]: time="2024-08-05T21:51:24.069310946Z" level=info msg="Start subscribing containerd event"
Aug 5 21:51:24.070044 containerd[1439]: time="2024-08-05T21:51:24.070030233Z" level=info msg="Start recovering state"
Aug 5 21:51:24.070171 containerd[1439]: time="2024-08-05T21:51:24.070138848Z" level=info msg="Start event monitor"
Aug 5 21:51:24.070242 containerd[1439]: time="2024-08-05T21:51:24.070228052Z" level=info msg="Start snapshots syncer"
Aug 5 21:51:24.070299 containerd[1439]: time="2024-08-05T21:51:24.070286735Z" level=info msg="Start cni network conf syncer for default"
Aug 5 21:51:24.070366 containerd[1439]: time="2024-08-05T21:51:24.070353271Z" level=info msg="Start streaming server"
Aug 5 21:51:24.070549 containerd[1439]: time="2024-08-05T21:51:24.070523174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 21:51:24.070593 containerd[1439]: time="2024-08-05T21:51:24.070580513Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 21:51:24.070645 containerd[1439]: time="2024-08-05T21:51:24.070633865Z" level=info msg="containerd successfully booted in 0.040519s"
Aug 5 21:51:24.070736 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 21:51:24.226423 tar[1430]: linux-arm64/LICENSE
Aug 5 21:51:24.226423 tar[1430]: linux-arm64/README.md
Aug 5 21:51:24.246210 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 21:51:25.338233 systemd-networkd[1366]: eth0: Gained IPv6LL
Aug 5 21:51:25.341229 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 21:51:25.342840 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 21:51:25.357370 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 21:51:25.359635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:25.361638 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 21:51:25.376434 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 21:51:25.376771 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 21:51:25.380147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 21:51:25.383486 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 21:51:25.843108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:25.844615 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 21:51:25.847623 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:51:25.848229 systemd[1]: Startup finished in 559ms (kernel) + 4.370s (initrd) + 3.805s (userspace) = 8.736s.
Aug 5 21:51:26.317083 kubelet[1525]: E0805 21:51:26.316908 1525 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:51:26.319758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:51:26.319915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:51:30.182869 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 21:51:30.183973 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:50318.service - OpenSSH per-connection server daemon (10.0.0.1:50318).
Aug 5 21:51:30.240206 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 50318 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:30.242099 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:30.254819 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 21:51:30.269432 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 21:51:30.271236 systemd-logind[1420]: New session 1 of user core.
Aug 5 21:51:30.280185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 21:51:30.282524 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 21:51:30.289475 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:30.379135 systemd[1543]: Queued start job for default target default.target.
Aug 5 21:51:30.391182 systemd[1543]: Created slice app.slice - User Application Slice.
Aug 5 21:51:30.391210 systemd[1543]: Reached target paths.target - Paths.
Aug 5 21:51:30.391222 systemd[1543]: Reached target timers.target - Timers.
Aug 5 21:51:30.392495 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 21:51:30.403731 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 21:51:30.403859 systemd[1543]: Reached target sockets.target - Sockets.
Aug 5 21:51:30.403877 systemd[1543]: Reached target basic.target - Basic System.
Aug 5 21:51:30.403919 systemd[1543]: Reached target default.target - Main User Target.
Aug 5 21:51:30.403950 systemd[1543]: Startup finished in 108ms.
Aug 5 21:51:30.404172 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 21:51:30.407024 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 21:51:30.467348 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:50322.service - OpenSSH per-connection server daemon (10.0.0.1:50322).
Aug 5 21:51:30.507491 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 50322 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:30.508830 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:30.513067 systemd-logind[1420]: New session 2 of user core.
Aug 5 21:51:30.525350 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 21:51:30.577636 sshd[1554]: pam_unix(sshd:session): session closed for user core
Aug 5 21:51:30.587541 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:50322.service: Deactivated successfully.
Aug 5 21:51:30.589693 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 21:51:30.591021 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit.
Aug 5 21:51:30.598564 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:50326.service - OpenSSH per-connection server daemon (10.0.0.1:50326).
Aug 5 21:51:30.599573 systemd-logind[1420]: Removed session 2.
Aug 5 21:51:30.633163 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 50326 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:30.634467 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:30.638597 systemd-logind[1420]: New session 3 of user core.
Aug 5 21:51:30.649372 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 21:51:30.697749 sshd[1561]: pam_unix(sshd:session): session closed for user core
Aug 5 21:51:30.714676 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:50326.service: Deactivated successfully.
Aug 5 21:51:30.716187 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 21:51:30.719306 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit.
Aug 5 21:51:30.720402 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:50332.service - OpenSSH per-connection server daemon (10.0.0.1:50332).
Aug 5 21:51:30.721100 systemd-logind[1420]: Removed session 3.
Aug 5 21:51:30.759349 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 50332 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:30.760687 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:30.766430 systemd-logind[1420]: New session 4 of user core.
Aug 5 21:51:30.775337 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 21:51:30.829471 sshd[1568]: pam_unix(sshd:session): session closed for user core
Aug 5 21:51:30.838578 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:50332.service: Deactivated successfully.
Aug 5 21:51:30.840040 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 21:51:30.842455 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit.
Aug 5 21:51:30.843845 systemd-logind[1420]: Removed session 4.
Aug 5 21:51:30.854708 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:50346.service - OpenSSH per-connection server daemon (10.0.0.1:50346).
Aug 5 21:51:30.889078 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 50346 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:30.890353 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:30.895203 systemd-logind[1420]: New session 5 of user core.
Aug 5 21:51:30.906407 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 21:51:30.968202 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 21:51:30.968458 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:51:30.982045 sudo[1578]: pam_unix(sudo:session): session closed for user root
Aug 5 21:51:30.984389 sshd[1575]: pam_unix(sshd:session): session closed for user core
Aug 5 21:51:30.993556 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:50346.service: Deactivated successfully.
Aug 5 21:51:30.996558 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 21:51:30.997867 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit.
Aug 5 21:51:31.009432 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:50348.service - OpenSSH per-connection server daemon (10.0.0.1:50348).
Aug 5 21:51:31.010286 systemd-logind[1420]: Removed session 5.
Aug 5 21:51:31.044244 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 50348 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:31.045588 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:31.049118 systemd-logind[1420]: New session 6 of user core.
Aug 5 21:51:31.056330 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 21:51:31.107873 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 21:51:31.108128 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:51:31.111204 sudo[1587]: pam_unix(sudo:session): session closed for user root
Aug 5 21:51:31.115951 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 21:51:31.116507 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:51:31.135520 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 21:51:31.136798 auditctl[1590]: No rules
Aug 5 21:51:31.137693 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 21:51:31.137891 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 21:51:31.139582 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:51:31.163989 augenrules[1608]: No rules
Aug 5 21:51:31.165329 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:51:31.166649 sudo[1586]: pam_unix(sudo:session): session closed for user root
Aug 5 21:51:31.168346 sshd[1583]: pam_unix(sshd:session): session closed for user core
Aug 5 21:51:31.180790 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:50348.service: Deactivated successfully.
Aug 5 21:51:31.182281 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 21:51:31.184040 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit.
Aug 5 21:51:31.197451 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:50356.service - OpenSSH per-connection server daemon (10.0.0.1:50356).
Aug 5 21:51:31.198601 systemd-logind[1420]: Removed session 6.
Aug 5 21:51:31.232107 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 50356 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:51:31.233428 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:51:31.237240 systemd-logind[1420]: New session 7 of user core.
Aug 5 21:51:31.243334 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 21:51:31.293622 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 21:51:31.294197 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:51:31.402416 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 21:51:31.402507 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 21:51:31.641830 dockerd[1629]: time="2024-08-05T21:51:31.639265329Z" level=info msg="Starting up"
Aug 5 21:51:31.730620 dockerd[1629]: time="2024-08-05T21:51:31.730571669Z" level=info msg="Loading containers: start."
Aug 5 21:51:31.821873 kernel: Initializing XFRM netlink socket
Aug 5 21:51:31.889915 systemd-networkd[1366]: docker0: Link UP
Aug 5 21:51:31.911939 dockerd[1629]: time="2024-08-05T21:51:31.911671411Z" level=info msg="Loading containers: done."
Aug 5 21:51:31.973252 dockerd[1629]: time="2024-08-05T21:51:31.973199693Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 21:51:31.973441 dockerd[1629]: time="2024-08-05T21:51:31.973399407Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 21:51:31.973544 dockerd[1629]: time="2024-08-05T21:51:31.973512534Z" level=info msg="Daemon has completed initialization"
Aug 5 21:51:31.994996 dockerd[1629]: time="2024-08-05T21:51:31.994874782Z" level=info msg="API listen on /run/docker.sock"
Aug 5 21:51:31.995216 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 21:51:32.667022 containerd[1439]: time="2024-08-05T21:51:32.666941038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\""
Aug 5 21:51:33.300132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712326947.mount: Deactivated successfully.
Aug 5 21:51:35.139562 containerd[1439]: time="2024-08-05T21:51:35.139510064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:35.141228 containerd[1439]: time="2024-08-05T21:51:35.141160600Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=32285113"
Aug 5 21:51:35.141866 containerd[1439]: time="2024-08-05T21:51:35.141832397Z" level=info msg="ImageCreate event name:\"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:35.145163 containerd[1439]: time="2024-08-05T21:51:35.145058412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:35.146125 containerd[1439]: time="2024-08-05T21:51:35.146079079Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"32281911\" in 2.47909637s"
Aug 5 21:51:35.146125 containerd[1439]: time="2024-08-05T21:51:35.146118677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\""
Aug 5 21:51:35.166260 containerd[1439]: time="2024-08-05T21:51:35.166209895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\""
Aug 5 21:51:36.570223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 21:51:36.579344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:36.671522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:36.675578 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:51:36.716887 kubelet[1840]: E0805 21:51:36.716785 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:51:36.720953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:51:36.721171 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:51:37.341125 containerd[1439]: time="2024-08-05T21:51:37.341062718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:37.341866 containerd[1439]: time="2024-08-05T21:51:37.341825086Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=29362253"
Aug 5 21:51:37.342393 containerd[1439]: time="2024-08-05T21:51:37.342360521Z" level=info msg="ImageCreate event name:\"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:37.345280 containerd[1439]: time="2024-08-05T21:51:37.345248139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:37.346411 containerd[1439]: time="2024-08-05T21:51:37.346371529Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"30849518\" in 2.180040841s"
Aug 5 21:51:37.346449 containerd[1439]: time="2024-08-05T21:51:37.346410608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\""
Aug 5 21:51:37.367753 containerd[1439]: time="2024-08-05T21:51:37.367718395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\""
Aug 5 21:51:38.696670 containerd[1439]: time="2024-08-05T21:51:38.696609984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:38.698780 containerd[1439]: time="2024-08-05T21:51:38.698736348Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=15751351"
Aug 5 21:51:38.701255 containerd[1439]: time="2024-08-05T21:51:38.701225160Z" level=info msg="ImageCreate event name:\"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:38.706263 containerd[1439]: time="2024-08-05T21:51:38.706215580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:38.707055 containerd[1439]: time="2024-08-05T21:51:38.707013913Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"17238634\" in 1.339254557s"
Aug 5 21:51:38.707055 containerd[1439]: time="2024-08-05T21:51:38.707048085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\""
Aug 5 21:51:38.727103 containerd[1439]: time="2024-08-05T21:51:38.727060177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\""
Aug 5 21:51:40.239099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832204381.mount: Deactivated successfully.
Aug 5 21:51:40.579829 containerd[1439]: time="2024-08-05T21:51:40.579695130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:40.580717 containerd[1439]: time="2024-08-05T21:51:40.580687001Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=25251734"
Aug 5 21:51:40.581528 containerd[1439]: time="2024-08-05T21:51:40.581507762Z" level=info msg="ImageCreate event name:\"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:40.583376 containerd[1439]: time="2024-08-05T21:51:40.583323961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:40.584117 containerd[1439]: time="2024-08-05T21:51:40.584067483Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"25250751\" in 1.85696496s"
Aug 5 21:51:40.584117 containerd[1439]: time="2024-08-05T21:51:40.584101553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\""
Aug 5 21:51:40.604194 containerd[1439]: time="2024-08-05T21:51:40.604145118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 21:51:41.097978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291814293.mount: Deactivated successfully.
Aug 5 21:51:41.647215 containerd[1439]: time="2024-08-05T21:51:41.647165071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:41.647646 containerd[1439]: time="2024-08-05T21:51:41.647611190Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Aug 5 21:51:41.648605 containerd[1439]: time="2024-08-05T21:51:41.648572032Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:41.651719 containerd[1439]: time="2024-08-05T21:51:41.651673912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:41.655923 containerd[1439]: time="2024-08-05T21:51:41.655845188Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.051652416s"
Aug 5 21:51:41.655923 containerd[1439]: time="2024-08-05T21:51:41.655921645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Aug 5 21:51:41.676149 containerd[1439]: time="2024-08-05T21:51:41.676034813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 21:51:42.545064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967071947.mount: Deactivated successfully.
Aug 5 21:51:42.767023 containerd[1439]: time="2024-08-05T21:51:42.766960087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:42.770948 containerd[1439]: time="2024-08-05T21:51:42.770904875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Aug 5 21:51:42.771882 containerd[1439]: time="2024-08-05T21:51:42.771855125Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:42.774167 containerd[1439]: time="2024-08-05T21:51:42.774080095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:42.775385 containerd[1439]: time="2024-08-05T21:51:42.774947776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.098872854s"
Aug 5 21:51:42.775385 containerd[1439]: time="2024-08-05T21:51:42.775002382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 21:51:42.795517 containerd[1439]: time="2024-08-05T21:51:42.795377502Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 21:51:43.341580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302942864.mount: Deactivated successfully.
Aug 5 21:51:44.706168 containerd[1439]: time="2024-08-05T21:51:44.705980974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:44.707098 containerd[1439]: time="2024-08-05T21:51:44.706822465Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Aug 5 21:51:44.707794 containerd[1439]: time="2024-08-05T21:51:44.707755185Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:44.711062 containerd[1439]: time="2024-08-05T21:51:44.711022991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:51:44.712374 containerd[1439]: time="2024-08-05T21:51:44.712335928Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.916912718s"
Aug 5 21:51:44.712426 containerd[1439]: time="2024-08-05T21:51:44.712375336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Aug 5 21:51:46.971597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 21:51:46.981352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:47.078048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:47.082376 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:51:47.125417 kubelet[2071]: E0805 21:51:47.125306 2071 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:51:47.128448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:51:47.128586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:51:48.693216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:48.705615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:48.727255 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-7.scope)...
Aug 5 21:51:48.727274 systemd[1]: Reloading...
Aug 5 21:51:48.793263 zram_generator::config[2126]: No configuration found.
Aug 5 21:51:48.876229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:51:48.930751 systemd[1]: Reloading finished in 203 ms.
Aug 5 21:51:48.971764 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:48.975595 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 21:51:48.975793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:48.979371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:49.071807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:49.077776 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 21:51:49.114109 kubelet[2170]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:51:49.114109 kubelet[2170]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 21:51:49.114109 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:51:49.114505 kubelet[2170]: I0805 21:51:49.114164 2170 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 21:51:49.566186 kubelet[2170]: I0805 21:51:49.566151 2170 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 21:51:49.566186 kubelet[2170]: I0805 21:51:49.566185 2170 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 21:51:49.566396 kubelet[2170]: I0805 21:51:49.566381 2170 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 21:51:49.603829 kubelet[2170]: I0805 21:51:49.603789 2170 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:51:49.604939 kubelet[2170]: E0805 21:51:49.604899 2170 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.614850 kubelet[2170]: I0805 21:51:49.614815 2170 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 21:51:49.615069 kubelet[2170]: I0805 21:51:49.615046 2170 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 21:51:49.615271 kubelet[2170]: I0805 21:51:49.615247 2170 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 21:51:49.615353 kubelet[2170]: I0805 21:51:49.615273 2170 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 21:51:49.615353 kubelet[2170]: I0805 21:51:49.615282 2170 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 21:51:49.615411 kubelet[2170]: I0805 21:51:49.615393 2170 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:51:49.617522 kubelet[2170]: I0805 21:51:49.617489 2170 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 21:51:49.617555 kubelet[2170]: I0805 21:51:49.617524 2170 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 21:51:49.617925 kubelet[2170]: I0805 21:51:49.617899 2170 kubelet.go:312] "Adding apiserver pod source"
Aug 5 21:51:49.617957 kubelet[2170]: I0805 21:51:49.617934 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 21:51:49.618026 kubelet[2170]: W0805 21:51:49.617974 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.618055 kubelet[2170]: E0805 21:51:49.618032 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.620208 kubelet[2170]: I0805 21:51:49.620187 2170 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 21:51:49.620435 kubelet[2170]: W0805 21:51:49.620381 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.620476 kubelet[2170]: E0805 21:51:49.620437 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.620680 kubelet[2170]: I0805 21:51:49.620648 2170 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 21:51:49.620775 kubelet[2170]: W0805 21:51:49.620764 2170 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 21:51:49.621957 kubelet[2170]: I0805 21:51:49.621912 2170 server.go:1256] "Started kubelet"
Aug 5 21:51:49.625204 kubelet[2170]: I0805 21:51:49.625031 2170 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 21:51:49.625943 kubelet[2170]: I0805 21:51:49.625910 2170 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 21:51:49.626291 kubelet[2170]: I0805 21:51:49.626275 2170 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 21:51:49.626471 kubelet[2170]: I0805 21:51:49.626457 2170 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 21:51:49.627016 kubelet[2170]: I0805 21:51:49.626999 2170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 21:51:49.630190 kubelet[2170]: I0805 21:51:49.627953 2170 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 21:51:49.630190 kubelet[2170]: I0805 21:51:49.628271 2170 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 21:51:49.630190 kubelet[2170]: I0805 21:51:49.628331 2170 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 21:51:49.630190 kubelet[2170]: E0805 21:51:49.629988 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms"
Aug 5 21:51:49.632184 kubelet[2170]: W0805 21:51:49.632117 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.632257 kubelet[2170]: E0805 21:51:49.632188 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.633649 kubelet[2170]: I0805 21:51:49.633624 2170 factory.go:221] Registration of the containerd container factory successfully
Aug 5 21:51:49.633649 kubelet[2170]: I0805 21:51:49.633644 2170 factory.go:221] Registration of the systemd container factory successfully
Aug 5 21:51:49.634008 kubelet[2170]: I0805 21:51:49.633706 2170 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 21:51:49.635160 kubelet[2170]: E0805 21:51:49.634915 2170 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f39c30699477 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 21:51:49.621888119 +0000 UTC m=+0.540914069,LastTimestamp:2024-08-05 21:51:49.621888119 +0000 UTC m=+0.540914069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 5 21:51:49.646598 kubelet[2170]: I0805 21:51:49.646573 2170 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 21:51:49.646598 kubelet[2170]: I0805 21:51:49.646593 2170 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 21:51:49.646706 kubelet[2170]: I0805 21:51:49.646607 2170 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:51:49.647518 kubelet[2170]: I0805 21:51:49.647483 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 21:51:49.649181 kubelet[2170]: I0805 21:51:49.648412 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 21:51:49.649181 kubelet[2170]: I0805 21:51:49.648428 2170 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 21:51:49.649181 kubelet[2170]: I0805 21:51:49.648445 2170 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 21:51:49.649181 kubelet[2170]: E0805 21:51:49.648487 2170 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 21:51:49.650038 kubelet[2170]: W0805 21:51:49.649984 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.650038 kubelet[2170]: E0805 21:51:49.650037 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:49.732529 kubelet[2170]: I0805 21:51:49.730286 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 21:51:49.732529 kubelet[2170]: E0805 21:51:49.732480 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Aug 5 21:51:49.748715 kubelet[2170]: E0805 21:51:49.748676 2170 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 21:51:49.793863 kubelet[2170]: I0805 21:51:49.793829 2170 policy_none.go:49] "None policy: Start"
Aug 5 21:51:49.794596 kubelet[2170]: I0805 21:51:49.794577 2170 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 21:51:49.794652 kubelet[2170]: I0805 21:51:49.794625 2170 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 21:51:49.801556 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 21:51:49.815801 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 21:51:49.818553 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 21:51:49.829097 kubelet[2170]: I0805 21:51:49.829062 2170 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 21:51:49.829389 kubelet[2170]: I0805 21:51:49.829359 2170 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 21:51:49.830733 kubelet[2170]: E0805 21:51:49.830693 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms"
Aug 5 21:51:49.831862 kubelet[2170]: E0805 21:51:49.831842 2170 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 21:51:49.934026 kubelet[2170]: I0805 21:51:49.933985 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 21:51:49.934342 kubelet[2170]: E0805 21:51:49.934314 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Aug 5 21:51:49.949538 kubelet[2170]: I0805 21:51:49.949508 2170 topology_manager.go:215] "Topology Admit Handler" podUID="abb94e9f424ada93d15192df8f753192" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 21:51:49.950486 kubelet[2170]: I0805 21:51:49.950457 2170 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 21:51:49.951320 kubelet[2170]: I0805 21:51:49.951276 2170 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 21:51:49.957060 systemd[1]: Created slice kubepods-burstable-podabb94e9f424ada93d15192df8f753192.slice - libcontainer container kubepods-burstable-podabb94e9f424ada93d15192df8f753192.slice.
Aug 5 21:51:49.984116 systemd[1]: Created slice kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice - libcontainer container kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice.
Aug 5 21:51:50.001063 systemd[1]: Created slice kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice - libcontainer container kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice.
Aug 5 21:51:50.031667 kubelet[2170]: I0805 21:51:50.031505 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abb94e9f424ada93d15192df8f753192-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb94e9f424ada93d15192df8f753192\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 21:51:50.031667 kubelet[2170]: I0805 21:51:50.031553 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abb94e9f424ada93d15192df8f753192-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abb94e9f424ada93d15192df8f753192\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 21:51:50.031667 kubelet[2170]: I0805 21:51:50.031576 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:51:50.031667 kubelet[2170]: I0805 21:51:50.031596 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:51:50.031667 kubelet[2170]: I0805 21:51:50.031639 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost"
Aug 5 21:51:50.031878 kubelet[2170]: I0805 21:51:50.031659 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abb94e9f424ada93d15192df8f753192-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb94e9f424ada93d15192df8f753192\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 21:51:50.031878 kubelet[2170]: I0805 21:51:50.031701 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:51:50.031878 kubelet[2170]: I0805 21:51:50.031755 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:51:50.031878 kubelet[2170]: I0805 21:51:50.031804 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:51:50.231264 kubelet[2170]: E0805 21:51:50.231119 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms"
Aug 5 21:51:50.283277 kubelet[2170]: E0805 21:51:50.283222 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:50.283909 containerd[1439]: time="2024-08-05T21:51:50.283863252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abb94e9f424ada93d15192df8f753192,Namespace:kube-system,Attempt:0,}"
Aug 5 21:51:50.299493 kubelet[2170]: E0805 21:51:50.299449 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:50.299901 containerd[1439]: time="2024-08-05T21:51:50.299859162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,}"
Aug 5 21:51:50.303190 kubelet[2170]: E0805 21:51:50.303161 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:50.303713 containerd[1439]: time="2024-08-05T21:51:50.303543070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,}"
Aug 5 21:51:50.336485 kubelet[2170]: I0805 21:51:50.336409 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 21:51:50.336818 kubelet[2170]: E0805 21:51:50.336786 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Aug 5 21:51:50.429150 kubelet[2170]: W0805 21:51:50.428184 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:50.429150 kubelet[2170]: E0805 21:51:50.428220 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:50.734995 kubelet[2170]: W0805 21:51:50.734951 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:50.734995 kubelet[2170]: E0805 21:51:50.734992 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:50.901024 kubelet[2170]: W0805 21:51:50.900937 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:50.901024 kubelet[2170]: E0805 21:51:50.901009 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:50.933960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425850252.mount: Deactivated successfully.
Aug 5 21:51:50.940480 containerd[1439]: time="2024-08-05T21:51:50.939727714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:51:50.940480 containerd[1439]: time="2024-08-05T21:51:50.940163549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Aug 5 21:51:50.940908 containerd[1439]: time="2024-08-05T21:51:50.940840634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:51:50.941584 containerd[1439]: time="2024-08-05T21:51:50.941550897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 21:51:50.942273 containerd[1439]: time="2024-08-05T21:51:50.942240870Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:51:50.943005 containerd[1439]: time="2024-08-05T21:51:50.942956736Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:51:50.944403 containerd[1439]: time="2024-08-05T21:51:50.944158224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 21:51:50.946715 containerd[1439]: time="2024-08-05T21:51:50.946648047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:51:50.948911 containerd[1439]: time="2024-08-05T21:51:50.948839190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 645.218118ms"
Aug 5 21:51:50.950458 containerd[1439]: time="2024-08-05T21:51:50.950175951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 650.224899ms"
Aug 5 21:51:50.951970 containerd[1439]: time="2024-08-05T21:51:50.951932819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 667.971114ms"
Aug 5 21:51:51.036435 kubelet[2170]: E0805 21:51:51.036323 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s"
Aug 5 21:51:51.051374 kubelet[2170]: W0805 21:51:51.051335 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:51.051374 kubelet[2170]: E0805 21:51:51.051376 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Aug 5 21:51:51.108845 containerd[1439]: time="2024-08-05T21:51:51.108704392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:51:51.108845 containerd[1439]: time="2024-08-05T21:51:51.108804239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:51:51.108845 containerd[1439]: time="2024-08-05T21:51:51.108819326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:51:51.108845 containerd[1439]: time="2024-08-05T21:51:51.108829251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:51:51.109561 containerd[1439]: time="2024-08-05T21:51:51.109469593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:51:51.109670 containerd[1439]: time="2024-08-05T21:51:51.109533624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:51:51.109756 containerd[1439]: time="2024-08-05T21:51:51.109653920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:51:51.109836 containerd[1439]: time="2024-08-05T21:51:51.109802470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:51:51.116332 containerd[1439]: time="2024-08-05T21:51:51.116247033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:51:51.116417 containerd[1439]: time="2024-08-05T21:51:51.116350922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:51:51.116417 containerd[1439]: time="2024-08-05T21:51:51.116383377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:51:51.116417 containerd[1439]: time="2024-08-05T21:51:51.116408629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:51:51.130309 systemd[1]: Started cri-containerd-2e636d71635b2a706b4675747c790da49faf826dde650b890b292b14e7adc00a.scope - libcontainer container 2e636d71635b2a706b4675747c790da49faf826dde650b890b292b14e7adc00a.
Aug 5 21:51:51.131380 systemd[1]: Started cri-containerd-f16a7e52a8b10bdd7159d3226c8023c37e4b76c67abc7e734f12176d82b5d769.scope - libcontainer container f16a7e52a8b10bdd7159d3226c8023c37e4b76c67abc7e734f12176d82b5d769.
Aug 5 21:51:51.136082 systemd[1]: Started cri-containerd-6bc6683b5e8ad7623782c5628e9835e53c02777e00950f50b30021431e8a25a6.scope - libcontainer container 6bc6683b5e8ad7623782c5628e9835e53c02777e00950f50b30021431e8a25a6.
Aug 5 21:51:51.142179 kubelet[2170]: I0805 21:51:51.141736 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 21:51:51.142179 kubelet[2170]: E0805 21:51:51.142064 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Aug 5 21:51:51.165882 containerd[1439]: time="2024-08-05T21:51:51.165743922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16a7e52a8b10bdd7159d3226c8023c37e4b76c67abc7e734f12176d82b5d769\""
Aug 5 21:51:51.167214 kubelet[2170]: E0805 21:51:51.167192 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:51.174157 containerd[1439]: time="2024-08-05T21:51:51.174078657Z" level=info msg="CreateContainer within sandbox \"f16a7e52a8b10bdd7159d3226c8023c37e4b76c67abc7e734f12176d82b5d769\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 21:51:51.177668 containerd[1439]: time="2024-08-05T21:51:51.177587513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e636d71635b2a706b4675747c790da49faf826dde650b890b292b14e7adc00a\""
Aug 5 21:51:51.178333 kubelet[2170]: E0805 21:51:51.178310 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:51.178928 containerd[1439]: time="2024-08-05T21:51:51.178902054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abb94e9f424ada93d15192df8f753192,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bc6683b5e8ad7623782c5628e9835e53c02777e00950f50b30021431e8a25a6\""
Aug 5 21:51:51.179862 kubelet[2170]: E0805 21:51:51.179840 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:51.180929 containerd[1439]: time="2024-08-05T21:51:51.180808914Z" level=info msg="CreateContainer within sandbox \"2e636d71635b2a706b4675747c790da49faf826dde650b890b292b14e7adc00a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 21:51:51.182788 containerd[1439]: time="2024-08-05T21:51:51.182739786Z" level=info msg="CreateContainer within sandbox \"6bc6683b5e8ad7623782c5628e9835e53c02777e00950f50b30021431e8a25a6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 5 21:51:51.187006 containerd[1439]: time="2024-08-05T21:51:51.186969463Z" level=info msg="CreateContainer within sandbox \"f16a7e52a8b10bdd7159d3226c8023c37e4b76c67abc7e734f12176d82b5d769\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e42abdccca75e0e2ef0b121854bfe24fe07cc81aa835981802d3d7e992c6d3d\""
Aug 5 21:51:51.187618 containerd[1439]: time="2024-08-05T21:51:51.187586994Z" level=info msg="StartContainer for \"3e42abdccca75e0e2ef0b121854bfe24fe07cc81aa835981802d3d7e992c6d3d\""
Aug 5 21:51:51.198447 containerd[1439]: time="2024-08-05T21:51:51.198410905Z" level=info msg="CreateContainer within sandbox \"2e636d71635b2a706b4675747c790da49faf826dde650b890b292b14e7adc00a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8b15c9821bf2fa397f344c6fb8a8027aff5a5464470e10c85baaa2ddb66e8d9\""
Aug 5 21:51:51.198946 containerd[1439]: time="2024-08-05T21:51:51.198874524Z" level=info msg="StartContainer for \"d8b15c9821bf2fa397f344c6fb8a8027aff5a5464470e10c85baaa2ddb66e8d9\""
Aug 5 21:51:51.200005 containerd[1439]: time="2024-08-05T21:51:51.199961317Z" level=info msg="CreateContainer within sandbox \"6bc6683b5e8ad7623782c5628e9835e53c02777e00950f50b30021431e8a25a6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e1f7f6a148952b647c8437f364e8cf86f379f2a37c57aacf0a0b932a18dee61\""
Aug 5 21:51:51.200502 containerd[1439]: time="2024-08-05T21:51:51.200335894Z" level=info msg="StartContainer for \"7e1f7f6a148952b647c8437f364e8cf86f379f2a37c57aacf0a0b932a18dee61\""
Aug 5 21:51:51.214324 systemd[1]: Started cri-containerd-3e42abdccca75e0e2ef0b121854bfe24fe07cc81aa835981802d3d7e992c6d3d.scope - libcontainer container 3e42abdccca75e0e2ef0b121854bfe24fe07cc81aa835981802d3d7e992c6d3d.
Aug 5 21:51:51.229315 systemd[1]: Started cri-containerd-d8b15c9821bf2fa397f344c6fb8a8027aff5a5464470e10c85baaa2ddb66e8d9.scope - libcontainer container d8b15c9821bf2fa397f344c6fb8a8027aff5a5464470e10c85baaa2ddb66e8d9.
Aug 5 21:51:51.233008 systemd[1]: Started cri-containerd-7e1f7f6a148952b647c8437f364e8cf86f379f2a37c57aacf0a0b932a18dee61.scope - libcontainer container 7e1f7f6a148952b647c8437f364e8cf86f379f2a37c57aacf0a0b932a18dee61.
Aug 5 21:51:51.296494 containerd[1439]: time="2024-08-05T21:51:51.296365632Z" level=info msg="StartContainer for \"7e1f7f6a148952b647c8437f364e8cf86f379f2a37c57aacf0a0b932a18dee61\" returns successfully"
Aug 5 21:51:51.297379 containerd[1439]: time="2024-08-05T21:51:51.296390043Z" level=info msg="StartContainer for \"d8b15c9821bf2fa397f344c6fb8a8027aff5a5464470e10c85baaa2ddb66e8d9\" returns successfully"
Aug 5 21:51:51.297379 containerd[1439]: time="2024-08-05T21:51:51.296396926Z" level=info msg="StartContainer for \"3e42abdccca75e0e2ef0b121854bfe24fe07cc81aa835981802d3d7e992c6d3d\" returns successfully"
Aug 5 21:51:51.656897 kubelet[2170]: E0805 21:51:51.656798 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:51.658641 kubelet[2170]: E0805 21:51:51.658546 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:51.660098 kubelet[2170]: E0805 21:51:51.660079 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:52.661703 kubelet[2170]: E0805 21:51:52.661674 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:51:52.744267 kubelet[2170]: I0805 21:51:52.744233 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 21:51:53.005098 kubelet[2170]: I0805 21:51:53.004991 2170 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Aug 5 21:51:53.032918 kubelet[2170]: E0805 21:51:53.032878 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.134013 kubelet[2170]: E0805 21:51:53.133970 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.235032 kubelet[2170]: E0805 21:51:53.234964 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.335170 kubelet[2170]: E0805 21:51:53.335045 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.435572 kubelet[2170]: E0805 21:51:53.435537 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.536629 kubelet[2170]: E0805 21:51:53.536590 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.637574 kubelet[2170]: E0805 21:51:53.637454 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.738501 kubelet[2170]: E0805 21:51:53.738455 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.839262 kubelet[2170]: E0805 21:51:53.839221 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:53.940543 kubelet[2170]: E0805 21:51:53.940433 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:51:54.622538 kubelet[2170]: I0805 21:51:54.622454 2170 apiserver.go:52] "Watching apiserver"
Aug 5 21:51:54.628894 kubelet[2170]: I0805 21:51:54.628841 2170 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 21:51:55.709458 systemd[1]: Reloading requested from client PID 2449 ('systemctl') (unit session-7.scope)...
Aug 5 21:51:55.709477 systemd[1]: Reloading...
Aug 5 21:51:55.781190 zram_generator::config[2486]: No configuration found.
Aug 5 21:51:55.866786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:51:55.936349 systemd[1]: Reloading finished in 226 ms.
Aug 5 21:51:55.972817 kubelet[2170]: I0805 21:51:55.972630 2170 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:51:55.972840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:55.985251 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 21:51:55.985528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:55.999410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:51:56.098980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:51:56.104222 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 21:51:56.148986 kubelet[2528]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:51:56.148986 kubelet[2528]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 21:51:56.148986 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:51:56.149340 kubelet[2528]: I0805 21:51:56.149031 2528 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 21:51:56.153304 kubelet[2528]: I0805 21:51:56.153271 2528 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 21:51:56.153304 kubelet[2528]: I0805 21:51:56.153300 2528 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 21:51:56.153579 kubelet[2528]: I0805 21:51:56.153493 2528 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 21:51:56.155145 kubelet[2528]: I0805 21:51:56.155119 2528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 21:51:56.158857 kubelet[2528]: I0805 21:51:56.158716 2528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:51:56.164163 kubelet[2528]: I0805 21:51:56.164029 2528 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 21:51:56.164384 kubelet[2528]: I0805 21:51:56.164371 2528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 21:51:56.164631 kubelet[2528]: I0805 21:51:56.164612 2528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164743 2528 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164760 2528 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164791 2528 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164876 2528 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164891 2528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164910 2528 kubelet.go:312] "Adding apiserver pod source"
Aug 5 21:51:56.165007 kubelet[2528]: I0805 21:51:56.164923 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 21:51:56.166447 kubelet[2528]: I0805 21:51:56.166418 2528 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 21:51:56.166633 kubelet[2528]: I0805 21:51:56.166611 2528 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 21:51:56.166988 kubelet[2528]: I0805 21:51:56.166959 2528 server.go:1256] "Started kubelet"
Aug 5 21:51:56.170172 kubelet[2528]: I0805 21:51:56.168670 2528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 21:51:56.170172 kubelet[2528]: I0805 21:51:56.169813 2528 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 21:51:56.171415 kubelet[2528]: I0805 21:51:56.170540 2528 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 21:51:56.173186 kubelet[2528]: I0805 21:51:56.171524 2528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 21:51:56.173186 kubelet[2528]: I0805 21:51:56.171700 2528 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 21:51:56.173186 kubelet[2528]: I0805 21:51:56.171906 2528 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 21:51:56.173186 kubelet[2528]: I0805 21:51:56.172012 2528 desired_state_of_world_populator.go:151] "Desired state populator
starts to run" Aug 5 21:51:56.173186 kubelet[2528]: I0805 21:51:56.172161 2528 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:51:56.173186 kubelet[2528]: E0805 21:51:56.172255 2528 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:51:56.194964 kubelet[2528]: I0805 21:51:56.194815 2528 factory.go:221] Registration of the systemd container factory successfully Aug 5 21:51:56.194964 kubelet[2528]: I0805 21:51:56.194911 2528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 21:51:56.197905 kubelet[2528]: E0805 21:51:56.197797 2528 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:51:56.199233 kubelet[2528]: I0805 21:51:56.199208 2528 factory.go:221] Registration of the containerd container factory successfully Aug 5 21:51:56.203302 kubelet[2528]: I0805 21:51:56.203278 2528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:51:56.206550 kubelet[2528]: I0805 21:51:56.206518 2528 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 21:51:56.206550 kubelet[2528]: I0805 21:51:56.206544 2528 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:51:56.206764 kubelet[2528]: I0805 21:51:56.206569 2528 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 21:51:56.206764 kubelet[2528]: E0805 21:51:56.206634 2528 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:51:56.236128 kubelet[2528]: I0805 21:51:56.235688 2528 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:51:56.236128 kubelet[2528]: I0805 21:51:56.235709 2528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:51:56.236128 kubelet[2528]: I0805 21:51:56.235727 2528 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:51:56.236128 kubelet[2528]: I0805 21:51:56.235958 2528 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 21:51:56.236128 kubelet[2528]: I0805 21:51:56.235987 2528 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 21:51:56.236128 kubelet[2528]: I0805 21:51:56.235995 2528 policy_none.go:49] "None policy: Start" Aug 5 21:51:56.238753 kubelet[2528]: I0805 21:51:56.238633 2528 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 21:51:56.238753 kubelet[2528]: I0805 21:51:56.238666 2528 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:51:56.239399 kubelet[2528]: I0805 21:51:56.239367 2528 state_mem.go:75] "Updated machine memory state" Aug 5 21:51:56.245234 kubelet[2528]: I0805 21:51:56.245212 2528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:51:56.245703 kubelet[2528]: I0805 21:51:56.245688 2528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:51:56.275787 kubelet[2528]: I0805 21:51:56.275759 2528 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Aug 5 21:51:56.282041 kubelet[2528]: I0805 21:51:56.281949 2528 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Aug 5 21:51:56.282041 kubelet[2528]: I0805 21:51:56.282034 2528 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 21:51:56.306811 kubelet[2528]: I0805 21:51:56.306745 2528 topology_manager.go:215] "Topology Admit Handler" podUID="abb94e9f424ada93d15192df8f753192" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 21:51:56.306952 kubelet[2528]: I0805 21:51:56.306839 2528 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 21:51:56.307377 kubelet[2528]: I0805 21:51:56.306894 2528 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 21:51:56.473299 kubelet[2528]: I0805 21:51:56.473263 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:56.473299 kubelet[2528]: I0805 21:51:56.473310 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 21:51:56.473478 kubelet[2528]: I0805 21:51:56.473338 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abb94e9f424ada93d15192df8f753192-ca-certs\") 
pod \"kube-apiserver-localhost\" (UID: \"abb94e9f424ada93d15192df8f753192\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:56.473478 kubelet[2528]: I0805 21:51:56.473359 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abb94e9f424ada93d15192df8f753192-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abb94e9f424ada93d15192df8f753192\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:56.473478 kubelet[2528]: I0805 21:51:56.473380 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:56.473478 kubelet[2528]: I0805 21:51:56.473401 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:56.473478 kubelet[2528]: I0805 21:51:56.473420 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abb94e9f424ada93d15192df8f753192-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb94e9f424ada93d15192df8f753192\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:56.473591 kubelet[2528]: I0805 21:51:56.473440 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:56.473591 kubelet[2528]: I0805 21:51:56.473462 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:56.651829 kubelet[2528]: E0805 21:51:56.651627 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:56.651829 kubelet[2528]: E0805 21:51:56.651688 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:56.653407 kubelet[2528]: E0805 21:51:56.653331 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:57.166452 kubelet[2528]: I0805 21:51:57.166401 2528 apiserver.go:52] "Watching apiserver" Aug 5 21:51:57.172230 kubelet[2528]: I0805 21:51:57.172191 2528 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:51:57.221682 kubelet[2528]: E0805 21:51:57.221644 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:57.226608 kubelet[2528]: E0805 21:51:57.226574 2528 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Aug 5 21:51:57.228603 kubelet[2528]: E0805 21:51:57.227127 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:57.228603 kubelet[2528]: E0805 21:51:57.227310 2528 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 5 21:51:57.230261 kubelet[2528]: E0805 21:51:57.229509 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:57.241826 kubelet[2528]: I0805 21:51:57.241790 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.241741716 podStartE2EDuration="1.241741716s" podCreationTimestamp="2024-08-05 21:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:57.241474662 +0000 UTC m=+1.132925273" watchObservedRunningTime="2024-08-05 21:51:57.241741716 +0000 UTC m=+1.133192327" Aug 5 21:51:57.248577 kubelet[2528]: I0805 21:51:57.248538 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.248496353 podStartE2EDuration="1.248496353s" podCreationTimestamp="2024-08-05 21:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:57.247952164 +0000 UTC m=+1.139402775" watchObservedRunningTime="2024-08-05 21:51:57.248496353 +0000 UTC m=+1.139946964" Aug 5 21:51:57.261792 kubelet[2528]: I0805 21:51:57.261738 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.261702727 podStartE2EDuration="1.261702727s" podCreationTimestamp="2024-08-05 21:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:51:57.25474845 +0000 UTC m=+1.146199061" watchObservedRunningTime="2024-08-05 21:51:57.261702727 +0000 UTC m=+1.153153338" Aug 5 21:51:58.222535 kubelet[2528]: E0805 21:51:58.222498 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:58.222880 kubelet[2528]: E0805 21:51:58.222641 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:58.223905 kubelet[2528]: E0805 21:51:58.223173 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:59.224178 kubelet[2528]: E0805 21:51:59.223955 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:51:59.224178 kubelet[2528]: E0805 21:51:59.223959 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:00.303003 sudo[1619]: pam_unix(sudo:session): session closed for user root Aug 5 21:52:00.304628 sshd[1616]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:00.310358 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Aug 5 21:52:00.311629 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:50356.service: Deactivated successfully. 
Aug 5 21:52:00.315269 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 21:52:00.315695 systemd[1]: session-7.scope: Consumed 5.968s CPU time, 139.1M memory peak, 0B memory swap peak.
Aug 5 21:52:00.317027 systemd-logind[1420]: Removed session 7.
Aug 5 21:52:01.643379 kubelet[2528]: E0805 21:52:01.643348 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:02.229034 kubelet[2528]: E0805 21:52:02.228779 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:03.230623 kubelet[2528]: E0805 21:52:03.230553 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:07.707469 kubelet[2528]: E0805 21:52:07.707383 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:07.750535 kubelet[2528]: E0805 21:52:07.749878 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:09.120280 update_engine[1423]: I0805 21:52:09.120215 1423 update_attempter.cc:509] Updating boot flags...
Aug 5 21:52:09.143187 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2629)
Aug 5 21:52:09.175223 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2627)
Aug 5 21:52:09.418951 kubelet[2528]: I0805 21:52:09.418824 2528 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 21:52:09.419330 containerd[1439]: time="2024-08-05T21:52:09.419250763Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 21:52:09.420194 kubelet[2528]: I0805 21:52:09.419506 2528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 21:52:10.058480 kubelet[2528]: I0805 21:52:10.058426 2528 topology_manager.go:215] "Topology Admit Handler" podUID="33caf3fa-6121-48cc-a4af-5b18170fb67f" podNamespace="kube-system" podName="kube-proxy-zth2m"
Aug 5 21:52:10.064767 kubelet[2528]: I0805 21:52:10.064623 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33caf3fa-6121-48cc-a4af-5b18170fb67f-xtables-lock\") pod \"kube-proxy-zth2m\" (UID: \"33caf3fa-6121-48cc-a4af-5b18170fb67f\") " pod="kube-system/kube-proxy-zth2m"
Aug 5 21:52:10.064767 kubelet[2528]: I0805 21:52:10.064666 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33caf3fa-6121-48cc-a4af-5b18170fb67f-lib-modules\") pod \"kube-proxy-zth2m\" (UID: \"33caf3fa-6121-48cc-a4af-5b18170fb67f\") " pod="kube-system/kube-proxy-zth2m"
Aug 5 21:52:10.064767 kubelet[2528]: I0805 21:52:10.064687 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33caf3fa-6121-48cc-a4af-5b18170fb67f-kube-proxy\") pod \"kube-proxy-zth2m\" (UID: \"33caf3fa-6121-48cc-a4af-5b18170fb67f\") " pod="kube-system/kube-proxy-zth2m"
Aug 5 21:52:10.064767 kubelet[2528]: I0805 21:52:10.064709 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwdq\" (UniqueName: \"kubernetes.io/projected/33caf3fa-6121-48cc-a4af-5b18170fb67f-kube-api-access-gwwdq\") pod \"kube-proxy-zth2m\" (UID: \"33caf3fa-6121-48cc-a4af-5b18170fb67f\") " pod="kube-system/kube-proxy-zth2m"
Aug 5 21:52:10.068937 systemd[1]: Created slice kubepods-besteffort-pod33caf3fa_6121_48cc_a4af_5b18170fb67f.slice - libcontainer container kubepods-besteffort-pod33caf3fa_6121_48cc_a4af_5b18170fb67f.slice.
Aug 5 21:52:10.172575 kubelet[2528]: E0805 21:52:10.172487 2528 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 5 21:52:10.172575 kubelet[2528]: E0805 21:52:10.172520 2528 projected.go:200] Error preparing data for projected volume kube-api-access-gwwdq for pod kube-system/kube-proxy-zth2m: configmap "kube-root-ca.crt" not found
Aug 5 21:52:10.172575 kubelet[2528]: E0805 21:52:10.172577 2528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33caf3fa-6121-48cc-a4af-5b18170fb67f-kube-api-access-gwwdq podName:33caf3fa-6121-48cc-a4af-5b18170fb67f nodeName:}" failed. No retries permitted until 2024-08-05 21:52:10.672557439 +0000 UTC m=+14.564008050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwwdq" (UniqueName: "kubernetes.io/projected/33caf3fa-6121-48cc-a4af-5b18170fb67f-kube-api-access-gwwdq") pod "kube-proxy-zth2m" (UID: "33caf3fa-6121-48cc-a4af-5b18170fb67f") : configmap "kube-root-ca.crt" not found
Aug 5 21:52:10.480249 kubelet[2528]: I0805 21:52:10.480204 2528 topology_manager.go:215] "Topology Admit Handler" podUID="5e0b6e93-9f00-44c6-8870-da1d63329054" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-rjmwf"
Aug 5 21:52:10.489889 systemd[1]: Created slice kubepods-besteffort-pod5e0b6e93_9f00_44c6_8870_da1d63329054.slice - libcontainer container kubepods-besteffort-pod5e0b6e93_9f00_44c6_8870_da1d63329054.slice.
Aug 5 21:52:10.567897 kubelet[2528]: I0805 21:52:10.567806 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e0b6e93-9f00-44c6-8870-da1d63329054-var-lib-calico\") pod \"tigera-operator-76c4974c85-rjmwf\" (UID: \"5e0b6e93-9f00-44c6-8870-da1d63329054\") " pod="tigera-operator/tigera-operator-76c4974c85-rjmwf"
Aug 5 21:52:10.567897 kubelet[2528]: I0805 21:52:10.567873 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9wcw\" (UniqueName: \"kubernetes.io/projected/5e0b6e93-9f00-44c6-8870-da1d63329054-kube-api-access-l9wcw\") pod \"tigera-operator-76c4974c85-rjmwf\" (UID: \"5e0b6e93-9f00-44c6-8870-da1d63329054\") " pod="tigera-operator/tigera-operator-76c4974c85-rjmwf"
Aug 5 21:52:10.794392 containerd[1439]: time="2024-08-05T21:52:10.794045084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rjmwf,Uid:5e0b6e93-9f00-44c6-8870-da1d63329054,Namespace:tigera-operator,Attempt:0,}"
Aug 5 21:52:10.817358 containerd[1439]: time="2024-08-05T21:52:10.816925978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:52:10.817358 containerd[1439]: time="2024-08-05T21:52:10.817339020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:52:10.817358 containerd[1439]: time="2024-08-05T21:52:10.817358702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:52:10.817612 containerd[1439]: time="2024-08-05T21:52:10.817371383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:52:10.836372 systemd[1]: Started cri-containerd-7fbd9761118ef633aac1ef5063308f31037fde67d2a6f7d45002f63dbe012f77.scope - libcontainer container 7fbd9761118ef633aac1ef5063308f31037fde67d2a6f7d45002f63dbe012f77.
Aug 5 21:52:10.863869 containerd[1439]: time="2024-08-05T21:52:10.863815880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rjmwf,Uid:5e0b6e93-9f00-44c6-8870-da1d63329054,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7fbd9761118ef633aac1ef5063308f31037fde67d2a6f7d45002f63dbe012f77\""
Aug 5 21:52:10.865711 containerd[1439]: time="2024-08-05T21:52:10.865685548Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 21:52:10.981837 kubelet[2528]: E0805 21:52:10.981544 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:10.982027 containerd[1439]: time="2024-08-05T21:52:10.981987771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zth2m,Uid:33caf3fa-6121-48cc-a4af-5b18170fb67f,Namespace:kube-system,Attempt:0,}"
Aug 5 21:52:10.999805 containerd[1439]: time="2024-08-05T21:52:10.999705868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:52:10.999805 containerd[1439]: time="2024-08-05T21:52:10.999772515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:52:10.999805 containerd[1439]: time="2024-08-05T21:52:10.999793837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:52:10.999996 containerd[1439]: time="2024-08-05T21:52:10.999814759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:52:11.018326 systemd[1]: Started cri-containerd-70724dba9a9c6c953e38d2eae31a9a69635f3fb3ca82c8660165f71418293089.scope - libcontainer container 70724dba9a9c6c953e38d2eae31a9a69635f3fb3ca82c8660165f71418293089.
Aug 5 21:52:11.036529 containerd[1439]: time="2024-08-05T21:52:11.036483107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zth2m,Uid:33caf3fa-6121-48cc-a4af-5b18170fb67f,Namespace:kube-system,Attempt:0,} returns sandbox id \"70724dba9a9c6c953e38d2eae31a9a69635f3fb3ca82c8660165f71418293089\""
Aug 5 21:52:11.037278 kubelet[2528]: E0805 21:52:11.037258 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:11.039390 containerd[1439]: time="2024-08-05T21:52:11.039354901Z" level=info msg="CreateContainer within sandbox \"70724dba9a9c6c953e38d2eae31a9a69635f3fb3ca82c8660165f71418293089\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 21:52:11.052160 containerd[1439]: time="2024-08-05T21:52:11.052048713Z" level=info msg="CreateContainer within sandbox \"70724dba9a9c6c953e38d2eae31a9a69635f3fb3ca82c8660165f71418293089\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"816ad174b2ea6ce52d6c3043f1526aefc6601419fefba30ccc99fa847d702cec\""
Aug 5 21:52:11.053381 containerd[1439]: time="2024-08-05T21:52:11.053356158Z" level=info msg="StartContainer for \"816ad174b2ea6ce52d6c3043f1526aefc6601419fefba30ccc99fa847d702cec\""
Aug 5 21:52:11.078514 systemd[1]: Started cri-containerd-816ad174b2ea6ce52d6c3043f1526aefc6601419fefba30ccc99fa847d702cec.scope - libcontainer container 816ad174b2ea6ce52d6c3043f1526aefc6601419fefba30ccc99fa847d702cec.
Aug 5 21:52:11.106517 containerd[1439]: time="2024-08-05T21:52:11.106406744Z" level=info msg="StartContainer for \"816ad174b2ea6ce52d6c3043f1526aefc6601419fefba30ccc99fa847d702cec\" returns successfully"
Aug 5 21:52:11.253889 kubelet[2528]: E0805 21:52:11.253857 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:52:11.263207 kubelet[2528]: I0805 21:52:11.263044 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zth2m" podStartSLOduration=1.262353436 podStartE2EDuration="1.262353436s" podCreationTimestamp="2024-08-05 21:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:52:11.262246866 +0000 UTC m=+15.153697517" watchObservedRunningTime="2024-08-05 21:52:11.262353436 +0000 UTC m=+15.153804047"
Aug 5 21:52:11.907841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595582786.mount: Deactivated successfully.
Aug 5 21:52:12.690744 containerd[1439]: time="2024-08-05T21:52:12.690672659Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:52:12.691156 containerd[1439]: time="2024-08-05T21:52:12.691112539Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473626"
Aug 5 21:52:12.692026 containerd[1439]: time="2024-08-05T21:52:12.691979297Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:52:12.694559 containerd[1439]: time="2024-08-05T21:52:12.694510288Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:52:12.695923 containerd[1439]: time="2024-08-05T21:52:12.695888813Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.830173422s"
Aug 5 21:52:12.695967 containerd[1439]: time="2024-08-05T21:52:12.695924296Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\""
Aug 5 21:52:12.698004 containerd[1439]: time="2024-08-05T21:52:12.697972403Z" level=info msg="CreateContainer within sandbox \"7fbd9761118ef633aac1ef5063308f31037fde67d2a6f7d45002f63dbe012f77\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 21:52:12.708789 containerd[1439]: time="2024-08-05T21:52:12.708735382Z" level=info msg="CreateContainer within sandbox \"7fbd9761118ef633aac1ef5063308f31037fde67d2a6f7d45002f63dbe012f77\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8150cc98be12b71e20d0b65694405f7c87403b33ed0b67463392c0ddcc86fb60\""
Aug 5 21:52:12.709324 containerd[1439]: time="2024-08-05T21:52:12.709288713Z" level=info msg="StartContainer for \"8150cc98be12b71e20d0b65694405f7c87403b33ed0b67463392c0ddcc86fb60\""
Aug 5 21:52:12.739305 systemd[1]: Started cri-containerd-8150cc98be12b71e20d0b65694405f7c87403b33ed0b67463392c0ddcc86fb60.scope - libcontainer container 8150cc98be12b71e20d0b65694405f7c87403b33ed0b67463392c0ddcc86fb60.
Aug 5 21:52:12.761167 containerd[1439]: time="2024-08-05T21:52:12.761110509Z" level=info msg="StartContainer for \"8150cc98be12b71e20d0b65694405f7c87403b33ed0b67463392c0ddcc86fb60\" returns successfully"
Aug 5 21:52:16.066391 kubelet[2528]: I0805 21:52:16.066342 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-rjmwf" podStartSLOduration=4.235406386 podStartE2EDuration="6.066291517s" podCreationTimestamp="2024-08-05 21:52:10 +0000 UTC" firstStartedPulling="2024-08-05 21:52:10.865300429 +0000 UTC m=+14.756751040" lastFinishedPulling="2024-08-05 21:52:12.69618556 +0000 UTC m=+16.587636171" observedRunningTime="2024-08-05 21:52:13.264203266 +0000 UTC m=+17.155653877" watchObservedRunningTime="2024-08-05 21:52:16.066291517 +0000 UTC m=+19.957742128"
Aug 5 21:52:16.066777 kubelet[2528]: I0805 21:52:16.066481 2528 topology_manager.go:215] "Topology Admit Handler" podUID="84ab6b85-4592-4db6-884e-c68ebb88fa01" podNamespace="calico-system" podName="calico-typha-6bb8699c87-hrz7f"
Aug 5 21:52:16.087685 systemd[1]: Created slice kubepods-besteffort-pod84ab6b85_4592_4db6_884e_c68ebb88fa01.slice - libcontainer container kubepods-besteffort-pod84ab6b85_4592_4db6_884e_c68ebb88fa01.slice.
Aug 5 21:52:16.107448 kubelet[2528]: I0805 21:52:16.107401 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84ab6b85-4592-4db6-884e-c68ebb88fa01-tigera-ca-bundle\") pod \"calico-typha-6bb8699c87-hrz7f\" (UID: \"84ab6b85-4592-4db6-884e-c68ebb88fa01\") " pod="calico-system/calico-typha-6bb8699c87-hrz7f"
Aug 5 21:52:16.107448 kubelet[2528]: I0805 21:52:16.107452 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/84ab6b85-4592-4db6-884e-c68ebb88fa01-typha-certs\") pod \"calico-typha-6bb8699c87-hrz7f\" (UID: \"84ab6b85-4592-4db6-884e-c68ebb88fa01\") " pod="calico-system/calico-typha-6bb8699c87-hrz7f"
Aug 5 21:52:16.107622 kubelet[2528]: I0805 21:52:16.107477 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhsxk\" (UniqueName: \"kubernetes.io/projected/84ab6b85-4592-4db6-884e-c68ebb88fa01-kube-api-access-rhsxk\") pod \"calico-typha-6bb8699c87-hrz7f\" (UID: \"84ab6b85-4592-4db6-884e-c68ebb88fa01\") " pod="calico-system/calico-typha-6bb8699c87-hrz7f"
Aug 5 21:52:16.112608 kubelet[2528]: I0805 21:52:16.112575 2528 topology_manager.go:215] "Topology Admit Handler" podUID="2f7476cf-4171-43e3-8b5d-7385c93c6ff8" podNamespace="calico-system" podName="calico-node-g8xh6"
Aug 5 21:52:16.122347 systemd[1]: Created slice kubepods-besteffort-pod2f7476cf_4171_43e3_8b5d_7385c93c6ff8.slice - libcontainer container kubepods-besteffort-pod2f7476cf_4171_43e3_8b5d_7385c93c6ff8.slice.
Aug 5 21:52:16.208420 kubelet[2528]: I0805 21:52:16.208367 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjkq\" (UniqueName: \"kubernetes.io/projected/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-kube-api-access-vxjkq\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208420 kubelet[2528]: I0805 21:52:16.208407 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-tigera-ca-bundle\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208615 kubelet[2528]: I0805 21:52:16.208455 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-cni-net-dir\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208615 kubelet[2528]: I0805 21:52:16.208485 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-cni-log-dir\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208615 kubelet[2528]: I0805 21:52:16.208504 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-var-run-calico\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208615 kubelet[2528]: I0805 21:52:16.208532 2528 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-flexvol-driver-host\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208756 kubelet[2528]: I0805 21:52:16.208617 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-xtables-lock\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208756 kubelet[2528]: I0805 21:52:16.208729 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-lib-modules\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.208800 kubelet[2528]: I0805 21:52:16.208771 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-cni-bin-dir\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.209029 kubelet[2528]: I0805 21:52:16.208837 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-node-certs\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.209029 kubelet[2528]: I0805 21:52:16.208862 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-var-lib-calico\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.210199 kubelet[2528]: I0805 21:52:16.210111 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f7476cf-4171-43e3-8b5d-7385c93c6ff8-policysync\") pod \"calico-node-g8xh6\" (UID: \"2f7476cf-4171-43e3-8b5d-7385c93c6ff8\") " pod="calico-system/calico-node-g8xh6" Aug 5 21:52:16.236345 kubelet[2528]: I0805 21:52:16.236308 2528 topology_manager.go:215] "Topology Admit Handler" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" podNamespace="calico-system" podName="csi-node-driver-rsntl" Aug 5 21:52:16.236605 kubelet[2528]: E0805 21:52:16.236562 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:16.314327 kubelet[2528]: I0805 21:52:16.310940 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d273804c-1785-4ad5-9b9f-33407f6c46a0-varrun\") pod \"csi-node-driver-rsntl\" (UID: \"d273804c-1785-4ad5-9b9f-33407f6c46a0\") " pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:16.314327 kubelet[2528]: I0805 21:52:16.311026 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7mh\" (UniqueName: \"kubernetes.io/projected/d273804c-1785-4ad5-9b9f-33407f6c46a0-kube-api-access-nt7mh\") pod \"csi-node-driver-rsntl\" (UID: \"d273804c-1785-4ad5-9b9f-33407f6c46a0\") " 
pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:16.314327 kubelet[2528]: I0805 21:52:16.311061 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d273804c-1785-4ad5-9b9f-33407f6c46a0-kubelet-dir\") pod \"csi-node-driver-rsntl\" (UID: \"d273804c-1785-4ad5-9b9f-33407f6c46a0\") " pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:16.314327 kubelet[2528]: I0805 21:52:16.311080 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d273804c-1785-4ad5-9b9f-33407f6c46a0-registration-dir\") pod \"csi-node-driver-rsntl\" (UID: \"d273804c-1785-4ad5-9b9f-33407f6c46a0\") " pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:16.314327 kubelet[2528]: I0805 21:52:16.311111 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d273804c-1785-4ad5-9b9f-33407f6c46a0-socket-dir\") pod \"csi-node-driver-rsntl\" (UID: \"d273804c-1785-4ad5-9b9f-33407f6c46a0\") " pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:16.314327 kubelet[2528]: E0805 21:52:16.312223 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.314737 kubelet[2528]: W0805 21:52:16.312239 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.314737 kubelet[2528]: E0805 21:52:16.312256 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.316344 kubelet[2528]: E0805 21:52:16.314993 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.316344 kubelet[2528]: W0805 21:52:16.315010 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.316344 kubelet[2528]: E0805 21:52:16.315032 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.322953 kubelet[2528]: E0805 21:52:16.322458 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.322953 kubelet[2528]: W0805 21:52:16.322476 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.322953 kubelet[2528]: E0805 21:52:16.322493 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.323174 kubelet[2528]: E0805 21:52:16.323157 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.323244 kubelet[2528]: W0805 21:52:16.323231 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.323296 kubelet[2528]: E0805 21:52:16.323287 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.396383 kubelet[2528]: E0805 21:52:16.396340 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:16.397098 containerd[1439]: time="2024-08-05T21:52:16.397011345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb8699c87-hrz7f,Uid:84ab6b85-4592-4db6-884e-c68ebb88fa01,Namespace:calico-system,Attempt:0,}" Aug 5 21:52:16.412131 kubelet[2528]: E0805 21:52:16.412016 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.412131 kubelet[2528]: W0805 21:52:16.412055 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.412131 kubelet[2528]: E0805 21:52:16.412080 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.414587 kubelet[2528]: E0805 21:52:16.414471 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.414587 kubelet[2528]: W0805 21:52:16.414487 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.414587 kubelet[2528]: E0805 21:52:16.414527 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.415067 kubelet[2528]: E0805 21:52:16.415027 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.415067 kubelet[2528]: W0805 21:52:16.415041 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.415326 kubelet[2528]: E0805 21:52:16.415230 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.415580 kubelet[2528]: E0805 21:52:16.415567 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.415580 kubelet[2528]: W0805 21:52:16.415609 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.415880 kubelet[2528]: E0805 21:52:16.415853 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.416122 kubelet[2528]: E0805 21:52:16.416109 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.416216 kubelet[2528]: W0805 21:52:16.416203 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.416410 kubelet[2528]: E0805 21:52:16.416298 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.416582 kubelet[2528]: E0805 21:52:16.416570 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.416728 kubelet[2528]: W0805 21:52:16.416629 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.416728 kubelet[2528]: E0805 21:52:16.416669 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.417268 kubelet[2528]: E0805 21:52:16.417249 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.417358 kubelet[2528]: W0805 21:52:16.417336 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.417496 kubelet[2528]: E0805 21:52:16.417423 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.417779 kubelet[2528]: E0805 21:52:16.417739 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.417779 kubelet[2528]: W0805 21:52:16.417750 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.417946 kubelet[2528]: E0805 21:52:16.417936 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.418339 kubelet[2528]: E0805 21:52:16.418289 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.418339 kubelet[2528]: W0805 21:52:16.418301 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.418479 kubelet[2528]: E0805 21:52:16.418440 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.418787 kubelet[2528]: E0805 21:52:16.418722 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.418787 kubelet[2528]: W0805 21:52:16.418747 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.418974 kubelet[2528]: E0805 21:52:16.418952 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.419168 kubelet[2528]: E0805 21:52:16.419113 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.419168 kubelet[2528]: W0805 21:52:16.419125 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.419373 kubelet[2528]: E0805 21:52:16.419360 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.419705 kubelet[2528]: E0805 21:52:16.419645 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.419705 kubelet[2528]: W0805 21:52:16.419684 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.419915 kubelet[2528]: E0805 21:52:16.419867 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.420297 kubelet[2528]: E0805 21:52:16.420194 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.420297 kubelet[2528]: W0805 21:52:16.420207 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.420409 kubelet[2528]: E0805 21:52:16.420398 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.421679 kubelet[2528]: E0805 21:52:16.421613 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.422089 kubelet[2528]: W0805 21:52:16.421999 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.422661 kubelet[2528]: E0805 21:52:16.422607 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423422 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.424617 kubelet[2528]: W0805 21:52:16.423440 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423495 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423641 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.424617 kubelet[2528]: W0805 21:52:16.423651 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423689 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423820 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.424617 kubelet[2528]: W0805 21:52:16.423828 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423894 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.424617 kubelet[2528]: E0805 21:52:16.423969 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.424881 kubelet[2528]: W0805 21:52:16.423975 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.425371 kubelet[2528]: E0805 21:52:16.425270 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.425469 kubelet[2528]: E0805 21:52:16.425453 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:16.425901 containerd[1439]: time="2024-08-05T21:52:16.425856688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g8xh6,Uid:2f7476cf-4171-43e3-8b5d-7385c93c6ff8,Namespace:calico-system,Attempt:0,}" Aug 5 21:52:16.426239 kubelet[2528]: E0805 21:52:16.426222 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.426239 kubelet[2528]: W0805 21:52:16.426261 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.426239 kubelet[2528]: E0805 21:52:16.426280 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.426643 containerd[1439]: time="2024-08-05T21:52:16.425366691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:16.426643 containerd[1439]: time="2024-08-05T21:52:16.426481936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:16.426643 containerd[1439]: time="2024-08-05T21:52:16.426501897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:16.426643 containerd[1439]: time="2024-08-05T21:52:16.426512778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:16.426917 kubelet[2528]: E0805 21:52:16.426851 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.426917 kubelet[2528]: W0805 21:52:16.426864 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.426917 kubelet[2528]: E0805 21:52:16.426879 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.428241 kubelet[2528]: E0805 21:52:16.427449 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.428241 kubelet[2528]: W0805 21:52:16.427477 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.428241 kubelet[2528]: E0805 21:52:16.427497 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.428241 kubelet[2528]: E0805 21:52:16.427720 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.428241 kubelet[2528]: W0805 21:52:16.427733 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.428241 kubelet[2528]: E0805 21:52:16.427747 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.428241 kubelet[2528]: E0805 21:52:16.428175 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.428241 kubelet[2528]: W0805 21:52:16.428186 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.428437 kubelet[2528]: E0805 21:52:16.428261 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.428437 kubelet[2528]: E0805 21:52:16.428386 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.428437 kubelet[2528]: W0805 21:52:16.428394 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.428497 kubelet[2528]: E0805 21:52:16.428454 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:16.429053 kubelet[2528]: E0805 21:52:16.428811 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.429053 kubelet[2528]: W0805 21:52:16.428821 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.429053 kubelet[2528]: E0805 21:52:16.428833 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.444577 kubelet[2528]: E0805 21:52:16.444361 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:16.444577 kubelet[2528]: W0805 21:52:16.444380 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:16.444577 kubelet[2528]: E0805 21:52:16.444401 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:16.453339 systemd[1]: Started cri-containerd-55be59b55c543d1d90b6ab3b21d7ce76156aac3e05e01ce49c9a858ef1570721.scope - libcontainer container 55be59b55c543d1d90b6ab3b21d7ce76156aac3e05e01ce49c9a858ef1570721. Aug 5 21:52:16.459128 containerd[1439]: time="2024-08-05T21:52:16.459039560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:16.459262 containerd[1439]: time="2024-08-05T21:52:16.459099044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:16.459262 containerd[1439]: time="2024-08-05T21:52:16.459119766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:16.459262 containerd[1439]: time="2024-08-05T21:52:16.459141967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:16.488299 systemd[1]: Started cri-containerd-46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e.scope - libcontainer container 46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e. Aug 5 21:52:16.502908 containerd[1439]: time="2024-08-05T21:52:16.502858556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb8699c87-hrz7f,Uid:84ab6b85-4592-4db6-884e-c68ebb88fa01,Namespace:calico-system,Attempt:0,} returns sandbox id \"55be59b55c543d1d90b6ab3b21d7ce76156aac3e05e01ce49c9a858ef1570721\"" Aug 5 21:52:16.507947 kubelet[2528]: E0805 21:52:16.507662 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:16.517738 containerd[1439]: time="2024-08-05T21:52:16.517694039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 21:52:16.532107 containerd[1439]: time="2024-08-05T21:52:16.531275186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g8xh6,Uid:2f7476cf-4171-43e3-8b5d-7385c93c6ff8,Namespace:calico-system,Attempt:0,} returns sandbox id \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\"" Aug 5 21:52:16.532235 kubelet[2528]: E0805 21:52:16.531913 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:18.227852 kubelet[2528]: 
E0805 21:52:18.227793 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:19.770983 containerd[1439]: time="2024-08-05T21:52:19.770915716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:19.772013 containerd[1439]: time="2024-08-05T21:52:19.771967666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 21:52:19.772879 containerd[1439]: time="2024-08-05T21:52:19.772852044Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:19.774818 containerd[1439]: time="2024-08-05T21:52:19.774787013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:19.775775 containerd[1439]: time="2024-08-05T21:52:19.775734396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 3.257997114s" Aug 5 21:52:19.775809 containerd[1439]: time="2024-08-05T21:52:19.775778039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 21:52:19.777281 
containerd[1439]: time="2024-08-05T21:52:19.777250897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 21:52:19.787793 containerd[1439]: time="2024-08-05T21:52:19.786070564Z" level=info msg="CreateContainer within sandbox \"55be59b55c543d1d90b6ab3b21d7ce76156aac3e05e01ce49c9a858ef1570721\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 21:52:19.801496 containerd[1439]: time="2024-08-05T21:52:19.801444706Z" level=info msg="CreateContainer within sandbox \"55be59b55c543d1d90b6ab3b21d7ce76156aac3e05e01ce49c9a858ef1570721\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8ad79dd49c2172eda5d42a3d39d99b233b293950383a5619532907f034f0ddb3\"" Aug 5 21:52:19.802074 containerd[1439]: time="2024-08-05T21:52:19.802048627Z" level=info msg="StartContainer for \"8ad79dd49c2172eda5d42a3d39d99b233b293950383a5619532907f034f0ddb3\"" Aug 5 21:52:19.833332 systemd[1]: Started cri-containerd-8ad79dd49c2172eda5d42a3d39d99b233b293950383a5619532907f034f0ddb3.scope - libcontainer container 8ad79dd49c2172eda5d42a3d39d99b233b293950383a5619532907f034f0ddb3. 
Aug 5 21:52:19.874564 containerd[1439]: time="2024-08-05T21:52:19.873827241Z" level=info msg="StartContainer for \"8ad79dd49c2172eda5d42a3d39d99b233b293950383a5619532907f034f0ddb3\" returns successfully" Aug 5 21:52:20.207094 kubelet[2528]: E0805 21:52:20.207052 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:20.296242 kubelet[2528]: E0805 21:52:20.295500 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:20.338543 kubelet[2528]: E0805 21:52:20.338504 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.338543 kubelet[2528]: W0805 21:52:20.338526 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.338543 kubelet[2528]: E0805 21:52:20.338550 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.339236 kubelet[2528]: E0805 21:52:20.339212 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.339236 kubelet[2528]: W0805 21:52:20.339228 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.339236 kubelet[2528]: E0805 21:52:20.339243 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.339451 kubelet[2528]: E0805 21:52:20.339430 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.339451 kubelet[2528]: W0805 21:52:20.339443 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.339516 kubelet[2528]: E0805 21:52:20.339458 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.339877 kubelet[2528]: E0805 21:52:20.339581 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.339877 kubelet[2528]: W0805 21:52:20.339594 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.339877 kubelet[2528]: E0805 21:52:20.339605 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.339877 kubelet[2528]: E0805 21:52:20.339737 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.339877 kubelet[2528]: W0805 21:52:20.339744 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.339877 kubelet[2528]: E0805 21:52:20.339754 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.339877 kubelet[2528]: E0805 21:52:20.339864 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.339877 kubelet[2528]: W0805 21:52:20.339870 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.339877 kubelet[2528]: E0805 21:52:20.339880 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.340094 kubelet[2528]: E0805 21:52:20.339997 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.340094 kubelet[2528]: W0805 21:52:20.340003 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.340094 kubelet[2528]: E0805 21:52:20.340014 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.340669 kubelet[2528]: E0805 21:52:20.340165 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.340669 kubelet[2528]: W0805 21:52:20.340174 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.340669 kubelet[2528]: I0805 21:52:20.340194 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6bb8699c87-hrz7f" podStartSLOduration=1.080642288 podStartE2EDuration="4.340159431s" podCreationTimestamp="2024-08-05 21:52:16 +0000 UTC" firstStartedPulling="2024-08-05 21:52:16.516823213 +0000 UTC m=+20.408273824" lastFinishedPulling="2024-08-05 21:52:19.776340396 +0000 UTC m=+23.667790967" observedRunningTime="2024-08-05 21:52:20.340057465 +0000 UTC m=+24.231508076" watchObservedRunningTime="2024-08-05 21:52:20.340159431 +0000 UTC m=+24.231610042" Aug 5 21:52:20.340669 kubelet[2528]: E0805 21:52:20.340218 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.340669 kubelet[2528]: E0805 21:52:20.340379 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.340669 kubelet[2528]: W0805 21:52:20.340387 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.340669 kubelet[2528]: E0805 21:52:20.340398 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.341633 kubelet[2528]: E0805 21:52:20.341602 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.341633 kubelet[2528]: W0805 21:52:20.341625 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.341729 kubelet[2528]: E0805 21:52:20.341642 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.342262 kubelet[2528]: E0805 21:52:20.342191 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.342430 kubelet[2528]: W0805 21:52:20.342210 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.342430 kubelet[2528]: E0805 21:52:20.342402 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.342755 kubelet[2528]: E0805 21:52:20.342739 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.342797 kubelet[2528]: W0805 21:52:20.342768 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.342797 kubelet[2528]: E0805 21:52:20.342783 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.345517 kubelet[2528]: E0805 21:52:20.343498 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.345517 kubelet[2528]: W0805 21:52:20.343514 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.345517 kubelet[2528]: E0805 21:52:20.343528 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.345517 kubelet[2528]: E0805 21:52:20.343755 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.345517 kubelet[2528]: W0805 21:52:20.343764 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.345517 kubelet[2528]: E0805 21:52:20.343776 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.345517 kubelet[2528]: E0805 21:52:20.343975 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.345517 kubelet[2528]: W0805 21:52:20.343982 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.345517 kubelet[2528]: E0805 21:52:20.343992 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.345752 kubelet[2528]: E0805 21:52:20.345610 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.345752 kubelet[2528]: W0805 21:52:20.345621 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.345752 kubelet[2528]: E0805 21:52:20.345634 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.345855 kubelet[2528]: E0805 21:52:20.345833 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.345855 kubelet[2528]: W0805 21:52:20.345846 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.345913 kubelet[2528]: E0805 21:52:20.345864 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.346067 kubelet[2528]: E0805 21:52:20.346047 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.346067 kubelet[2528]: W0805 21:52:20.346060 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.346125 kubelet[2528]: E0805 21:52:20.346076 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.346326 kubelet[2528]: E0805 21:52:20.346303 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.346326 kubelet[2528]: W0805 21:52:20.346317 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.346402 kubelet[2528]: E0805 21:52:20.346333 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.346511 kubelet[2528]: E0805 21:52:20.346493 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.346511 kubelet[2528]: W0805 21:52:20.346504 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.346565 kubelet[2528]: E0805 21:52:20.346517 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.346837 kubelet[2528]: E0805 21:52:20.346814 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.346837 kubelet[2528]: W0805 21:52:20.346829 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.346902 kubelet[2528]: E0805 21:52:20.346846 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.347044 kubelet[2528]: E0805 21:52:20.347027 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.347044 kubelet[2528]: W0805 21:52:20.347039 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.347113 kubelet[2528]: E0805 21:52:20.347092 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.347224 kubelet[2528]: E0805 21:52:20.347207 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.347224 kubelet[2528]: W0805 21:52:20.347219 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.347355 kubelet[2528]: E0805 21:52:20.347261 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.347511 kubelet[2528]: E0805 21:52:20.347489 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.347511 kubelet[2528]: W0805 21:52:20.347503 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.347572 kubelet[2528]: E0805 21:52:20.347522 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.347731 kubelet[2528]: E0805 21:52:20.347711 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.347731 kubelet[2528]: W0805 21:52:20.347722 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.347795 kubelet[2528]: E0805 21:52:20.347737 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.347896 kubelet[2528]: E0805 21:52:20.347878 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.347896 kubelet[2528]: W0805 21:52:20.347888 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.347954 kubelet[2528]: E0805 21:52:20.347901 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.348067 kubelet[2528]: E0805 21:52:20.348052 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.348067 kubelet[2528]: W0805 21:52:20.348065 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.348123 kubelet[2528]: E0805 21:52:20.348078 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.348362 kubelet[2528]: E0805 21:52:20.348334 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.348362 kubelet[2528]: W0805 21:52:20.348352 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.348428 kubelet[2528]: E0805 21:52:20.348372 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.348630 kubelet[2528]: E0805 21:52:20.348613 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.348630 kubelet[2528]: W0805 21:52:20.348624 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.348690 kubelet[2528]: E0805 21:52:20.348657 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.348895 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.350170 kubelet[2528]: W0805 21:52:20.348916 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.348945 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.349080 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.350170 kubelet[2528]: W0805 21:52:20.349089 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.349105 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.349308 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.350170 kubelet[2528]: W0805 21:52:20.349317 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.349328 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:20.350170 kubelet[2528]: E0805 21:52:20.349753 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:20.350517 kubelet[2528]: W0805 21:52:20.349767 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:20.350517 kubelet[2528]: E0805 21:52:20.349781 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:21.225115 containerd[1439]: time="2024-08-05T21:52:21.225071283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:21.226121 containerd[1439]: time="2024-08-05T21:52:21.225950257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 21:52:21.228045 containerd[1439]: time="2024-08-05T21:52:21.227042644Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:21.229272 containerd[1439]: time="2024-08-05T21:52:21.229218417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:21.230275 containerd[1439]: time="2024-08-05T21:52:21.230237840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.45295154s" Aug 5 21:52:21.230379 containerd[1439]: time="2024-08-05T21:52:21.230359807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 21:52:21.234772 containerd[1439]: time="2024-08-05T21:52:21.234731435Z" level=info msg="CreateContainer within sandbox \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 21:52:21.261645 containerd[1439]: time="2024-08-05T21:52:21.261530319Z" level=info msg="CreateContainer within sandbox \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2\"" Aug 5 21:52:21.263259 containerd[1439]: time="2024-08-05T21:52:21.262131116Z" level=info msg="StartContainer for \"c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2\"" Aug 5 21:52:21.295251 systemd[1]: run-containerd-runc-k8s.io-c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2-runc.1wfDZN.mount: Deactivated successfully. Aug 5 21:52:21.299712 kubelet[2528]: I0805 21:52:21.299672 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:52:21.304408 kubelet[2528]: E0805 21:52:21.300312 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:21.304571 systemd[1]: Started cri-containerd-c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2.scope - libcontainer container c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2. 
Aug 5 21:52:21.327986 containerd[1439]: time="2024-08-05T21:52:21.327889109Z" level=info msg="StartContainer for \"c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2\" returns successfully" Aug 5 21:52:21.350354 kubelet[2528]: E0805 21:52:21.350217 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.350354 kubelet[2528]: W0805 21:52:21.350240 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.350354 kubelet[2528]: E0805 21:52:21.350264 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:21.350710 kubelet[2528]: E0805 21:52:21.350635 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.350710 kubelet[2528]: W0805 21:52:21.350646 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.350710 kubelet[2528]: E0805 21:52:21.350674 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:21.351146 kubelet[2528]: E0805 21:52:21.351068 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.351146 kubelet[2528]: W0805 21:52:21.351084 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.351146 kubelet[2528]: E0805 21:52:21.351097 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:21.351487 kubelet[2528]: E0805 21:52:21.351395 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.351487 kubelet[2528]: W0805 21:52:21.351407 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.351487 kubelet[2528]: E0805 21:52:21.351427 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:21.351913 kubelet[2528]: E0805 21:52:21.351814 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.351913 kubelet[2528]: W0805 21:52:21.351826 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.351913 kubelet[2528]: E0805 21:52:21.351839 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:21.352242 kubelet[2528]: E0805 21:52:21.352184 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.352242 kubelet[2528]: W0805 21:52:21.352196 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.352242 kubelet[2528]: E0805 21:52:21.352208 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:21.352608 kubelet[2528]: E0805 21:52:21.352514 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.352608 kubelet[2528]: W0805 21:52:21.352526 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.352608 kubelet[2528]: E0805 21:52:21.352538 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:21.352892 kubelet[2528]: E0805 21:52:21.352880 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.353020 kubelet[2528]: W0805 21:52:21.352961 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.353020 kubelet[2528]: E0805 21:52:21.352979 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:52:21.353433 kubelet[2528]: E0805 21:52:21.353330 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.353433 kubelet[2528]: W0805 21:52:21.353342 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.353433 kubelet[2528]: E0805 21:52:21.353361 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:21.353619 kubelet[2528]: E0805 21:52:21.353608 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:52:21.353730 kubelet[2528]: W0805 21:52:21.353679 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:52:21.353730 kubelet[2528]: E0805 21:52:21.353696 2528 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:52:21.359790 systemd[1]: cri-containerd-c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2.scope: Deactivated successfully. 
Aug 5 21:52:21.481654 containerd[1439]: time="2024-08-05T21:52:21.481520691Z" level=info msg="shim disconnected" id=c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2 namespace=k8s.io Aug 5 21:52:21.481654 containerd[1439]: time="2024-08-05T21:52:21.481578454Z" level=warning msg="cleaning up after shim disconnected" id=c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2 namespace=k8s.io Aug 5 21:52:21.481654 containerd[1439]: time="2024-08-05T21:52:21.481587495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:21.780743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2e6584a29bf67fc330570894447526d887e1c7efe719a9eec19e7dc77c2a1d2-rootfs.mount: Deactivated successfully. Aug 5 21:52:22.208201 kubelet[2528]: E0805 21:52:22.208090 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:22.303276 kubelet[2528]: E0805 21:52:22.303092 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:22.304618 containerd[1439]: time="2024-08-05T21:52:22.304281996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 21:52:24.209458 kubelet[2528]: E0805 21:52:24.208266 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:26.209569 kubelet[2528]: E0805 21:52:26.209517 2528 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:28.168648 containerd[1439]: time="2024-08-05T21:52:28.168594416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:28.169765 containerd[1439]: time="2024-08-05T21:52:28.169726310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Aug 5 21:52:28.170494 containerd[1439]: time="2024-08-05T21:52:28.170463025Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:28.172522 containerd[1439]: time="2024-08-05T21:52:28.172476041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:28.173289 containerd[1439]: time="2024-08-05T21:52:28.173251038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 5.8689304s" Aug 5 21:52:28.173335 containerd[1439]: time="2024-08-05T21:52:28.173287720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Aug 5 21:52:28.176066 containerd[1439]: time="2024-08-05T21:52:28.176034891Z" level=info msg="CreateContainer within 
sandbox \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 21:52:28.190366 containerd[1439]: time="2024-08-05T21:52:28.190265249Z" level=info msg="CreateContainer within sandbox \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580\"" Aug 5 21:52:28.191410 containerd[1439]: time="2024-08-05T21:52:28.191345020Z" level=info msg="StartContainer for \"4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580\"" Aug 5 21:52:28.208166 kubelet[2528]: E0805 21:52:28.208009 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:28.225313 systemd[1]: Started cri-containerd-4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580.scope - libcontainer container 4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580. 
Aug 5 21:52:28.249470 containerd[1439]: time="2024-08-05T21:52:28.249430987Z" level=info msg="StartContainer for \"4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580\" returns successfully" Aug 5 21:52:28.324186 kubelet[2528]: E0805 21:52:28.324097 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:28.833835 containerd[1439]: time="2024-08-05T21:52:28.833781981Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:52:28.836346 systemd[1]: cri-containerd-4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580.scope: Deactivated successfully. Aug 5 21:52:28.853269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580-rootfs.mount: Deactivated successfully. 
Aug 5 21:52:28.901655 kubelet[2528]: I0805 21:52:28.901476 2528 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 21:52:28.928164 kubelet[2528]: I0805 21:52:28.927739 2528 topology_manager.go:215] "Topology Admit Handler" podUID="dbb8ef3a-6e32-4c6c-91b3-57dd29571e98" podNamespace="kube-system" podName="coredns-76f75df574-q8fl2" Aug 5 21:52:28.928981 kubelet[2528]: I0805 21:52:28.928919 2528 topology_manager.go:215] "Topology Admit Handler" podUID="e4d6c640-dd8d-409d-b5ce-dbdbc361cc76" podNamespace="calico-system" podName="calico-kube-controllers-85cbdc89-d4rsn" Aug 5 21:52:28.929554 kubelet[2528]: I0805 21:52:28.929519 2528 topology_manager.go:215] "Topology Admit Handler" podUID="898cb65e-844a-495e-afd2-62c371049ceb" podNamespace="kube-system" podName="coredns-76f75df574-mws6f" Aug 5 21:52:28.930682 kubelet[2528]: I0805 21:52:28.929964 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbv9\" (UniqueName: \"kubernetes.io/projected/dbb8ef3a-6e32-4c6c-91b3-57dd29571e98-kube-api-access-khbv9\") pod \"coredns-76f75df574-q8fl2\" (UID: \"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98\") " pod="kube-system/coredns-76f75df574-q8fl2" Aug 5 21:52:28.930682 kubelet[2528]: I0805 21:52:28.930007 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d6c640-dd8d-409d-b5ce-dbdbc361cc76-tigera-ca-bundle\") pod \"calico-kube-controllers-85cbdc89-d4rsn\" (UID: \"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76\") " pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" Aug 5 21:52:28.930682 kubelet[2528]: I0805 21:52:28.930040 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbb8ef3a-6e32-4c6c-91b3-57dd29571e98-config-volume\") pod \"coredns-76f75df574-q8fl2\" (UID: 
\"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98\") " pod="kube-system/coredns-76f75df574-q8fl2" Aug 5 21:52:28.930682 kubelet[2528]: I0805 21:52:28.930063 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/898cb65e-844a-495e-afd2-62c371049ceb-config-volume\") pod \"coredns-76f75df574-mws6f\" (UID: \"898cb65e-844a-495e-afd2-62c371049ceb\") " pod="kube-system/coredns-76f75df574-mws6f" Aug 5 21:52:28.930682 kubelet[2528]: I0805 21:52:28.930233 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps7s\" (UniqueName: \"kubernetes.io/projected/898cb65e-844a-495e-afd2-62c371049ceb-kube-api-access-bps7s\") pod \"coredns-76f75df574-mws6f\" (UID: \"898cb65e-844a-495e-afd2-62c371049ceb\") " pod="kube-system/coredns-76f75df574-mws6f" Aug 5 21:52:28.930855 kubelet[2528]: I0805 21:52:28.930588 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgf8g\" (UniqueName: \"kubernetes.io/projected/e4d6c640-dd8d-409d-b5ce-dbdbc361cc76-kube-api-access-pgf8g\") pod \"calico-kube-controllers-85cbdc89-d4rsn\" (UID: \"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76\") " pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" Aug 5 21:52:28.938084 systemd[1]: Created slice kubepods-burstable-poddbb8ef3a_6e32_4c6c_91b3_57dd29571e98.slice - libcontainer container kubepods-burstable-poddbb8ef3a_6e32_4c6c_91b3_57dd29571e98.slice. 
Aug 5 21:52:28.942897 containerd[1439]: time="2024-08-05T21:52:28.942812734Z" level=info msg="shim disconnected" id=4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580 namespace=k8s.io Aug 5 21:52:28.942897 containerd[1439]: time="2024-08-05T21:52:28.942885538Z" level=warning msg="cleaning up after shim disconnected" id=4ab6f0d8df1b9aca24c9688b303c1ea9504edde191242b7100760a7426730580 namespace=k8s.io Aug 5 21:52:28.942897 containerd[1439]: time="2024-08-05T21:52:28.942894178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:52:28.944190 systemd[1]: Created slice kubepods-burstable-pod898cb65e_844a_495e_afd2_62c371049ceb.slice - libcontainer container kubepods-burstable-pod898cb65e_844a_495e_afd2_62c371049ceb.slice. Aug 5 21:52:28.954481 systemd[1]: Created slice kubepods-besteffort-pode4d6c640_dd8d_409d_b5ce_dbdbc361cc76.slice - libcontainer container kubepods-besteffort-pode4d6c640_dd8d_409d_b5ce_dbdbc361cc76.slice. Aug 5 21:52:29.242614 kubelet[2528]: E0805 21:52:29.242570 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:29.243638 containerd[1439]: time="2024-08-05T21:52:29.243602218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8fl2,Uid:dbb8ef3a-6e32-4c6c-91b3-57dd29571e98,Namespace:kube-system,Attempt:0,}" Aug 5 21:52:29.251377 kubelet[2528]: E0805 21:52:29.250994 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:29.252702 containerd[1439]: time="2024-08-05T21:52:29.252661996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mws6f,Uid:898cb65e-844a-495e-afd2-62c371049ceb,Namespace:kube-system,Attempt:0,}" Aug 5 21:52:29.260519 containerd[1439]: time="2024-08-05T21:52:29.260290988Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cbdc89-d4rsn,Uid:e4d6c640-dd8d-409d-b5ce-dbdbc361cc76,Namespace:calico-system,Attempt:0,}" Aug 5 21:52:29.336443 kubelet[2528]: E0805 21:52:29.335841 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:29.338570 containerd[1439]: time="2024-08-05T21:52:29.338538078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 21:52:29.486035 containerd[1439]: time="2024-08-05T21:52:29.485933278Z" level=error msg="Failed to destroy network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.487578 containerd[1439]: time="2024-08-05T21:52:29.487447188Z" level=error msg="Failed to destroy network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.488423 containerd[1439]: time="2024-08-05T21:52:29.488377431Z" level=error msg="encountered an error cleaning up failed sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.488503 containerd[1439]: time="2024-08-05T21:52:29.488440193Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-q8fl2,Uid:dbb8ef3a-6e32-4c6c-91b3-57dd29571e98,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.490111 kubelet[2528]: E0805 21:52:29.489280 2528 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.489719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80-shm.mount: Deactivated successfully. 
Aug 5 21:52:29.491173 containerd[1439]: time="2024-08-05T21:52:29.490710338Z" level=error msg="encountered an error cleaning up failed sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.491173 containerd[1439]: time="2024-08-05T21:52:29.490772221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mws6f,Uid:898cb65e-844a-495e-afd2-62c371049ceb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.491969 kubelet[2528]: E0805 21:52:29.491927 2528 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q8fl2" Aug 5 21:52:29.491969 kubelet[2528]: E0805 21:52:29.491965 2528 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q8fl2" Aug 5 21:52:29.492055 
kubelet[2528]: E0805 21:52:29.492019 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-q8fl2_kube-system(dbb8ef3a-6e32-4c6c-91b3-57dd29571e98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-q8fl2_kube-system(dbb8ef3a-6e32-4c6c-91b3-57dd29571e98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q8fl2" podUID="dbb8ef3a-6e32-4c6c-91b3-57dd29571e98" Aug 5 21:52:29.492228 kubelet[2528]: E0805 21:52:29.491331 2528 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.492271 kubelet[2528]: E0805 21:52:29.492241 2528 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mws6f" Aug 5 21:52:29.492297 kubelet[2528]: E0805 21:52:29.492281 2528 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mws6f" Aug 5 21:52:29.492328 kubelet[2528]: E0805 21:52:29.492316 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mws6f_kube-system(898cb65e-844a-495e-afd2-62c371049ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mws6f_kube-system(898cb65e-844a-495e-afd2-62c371049ceb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mws6f" podUID="898cb65e-844a-495e-afd2-62c371049ceb" Aug 5 21:52:29.501320 containerd[1439]: time="2024-08-05T21:52:29.501192262Z" level=error msg="Failed to destroy network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.501545 containerd[1439]: time="2024-08-05T21:52:29.501502396Z" level=error msg="encountered an error cleaning up failed sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.501584 containerd[1439]: time="2024-08-05T21:52:29.501558559Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-85cbdc89-d4rsn,Uid:e4d6c640-dd8d-409d-b5ce-dbdbc361cc76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.501806 kubelet[2528]: E0805 21:52:29.501772 2528 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:29.501850 kubelet[2528]: E0805 21:52:29.501821 2528 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" Aug 5 21:52:29.501850 kubelet[2528]: E0805 21:52:29.501841 2528 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" Aug 5 21:52:29.501916 kubelet[2528]: E0805 21:52:29.501895 2528 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85cbdc89-d4rsn_calico-system(e4d6c640-dd8d-409d-b5ce-dbdbc361cc76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cbdc89-d4rsn_calico-system(e4d6c640-dd8d-409d-b5ce-dbdbc361cc76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" podUID="e4d6c640-dd8d-409d-b5ce-dbdbc361cc76" Aug 5 21:52:30.213483 systemd[1]: Created slice kubepods-besteffort-podd273804c_1785_4ad5_9b9f_33407f6c46a0.slice - libcontainer container kubepods-besteffort-podd273804c_1785_4ad5_9b9f_33407f6c46a0.slice. Aug 5 21:52:30.215619 containerd[1439]: time="2024-08-05T21:52:30.215580398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsntl,Uid:d273804c-1785-4ad5-9b9f-33407f6c46a0,Namespace:calico-system,Attempt:0,}" Aug 5 21:52:30.255508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe-shm.mount: Deactivated successfully. Aug 5 21:52:30.255601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb-shm.mount: Deactivated successfully. 
Aug 5 21:52:30.269440 containerd[1439]: time="2024-08-05T21:52:30.269185275Z" level=error msg="Failed to destroy network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.269733 containerd[1439]: time="2024-08-05T21:52:30.269623255Z" level=error msg="encountered an error cleaning up failed sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.269733 containerd[1439]: time="2024-08-05T21:52:30.269673857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsntl,Uid:d273804c-1785-4ad5-9b9f-33407f6c46a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.270018 kubelet[2528]: E0805 21:52:30.269924 2528 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.270018 kubelet[2528]: E0805 21:52:30.269983 2528 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:30.270018 kubelet[2528]: E0805 21:52:30.270010 2528 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rsntl" Aug 5 21:52:30.270297 kubelet[2528]: E0805 21:52:30.270059 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rsntl_calico-system(d273804c-1785-4ad5-9b9f-33407f6c46a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rsntl_calico-system(d273804c-1785-4ad5-9b9f-33407f6c46a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:30.273746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af-shm.mount: Deactivated successfully. 
Aug 5 21:52:30.332048 kubelet[2528]: I0805 21:52:30.332017 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:30.332990 containerd[1439]: time="2024-08-05T21:52:30.332542189Z" level=info msg="StopPodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\"" Aug 5 21:52:30.332990 containerd[1439]: time="2024-08-05T21:52:30.332735998Z" level=info msg="Ensure that sandbox a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80 in task-service has been cleanup successfully" Aug 5 21:52:30.333360 kubelet[2528]: I0805 21:52:30.333340 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:30.333999 containerd[1439]: time="2024-08-05T21:52:30.333664559Z" level=info msg="StopPodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\"" Aug 5 21:52:30.333999 containerd[1439]: time="2024-08-05T21:52:30.333810286Z" level=info msg="Ensure that sandbox 4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb in task-service has been cleanup successfully" Aug 5 21:52:30.335438 kubelet[2528]: I0805 21:52:30.335400 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:30.336643 containerd[1439]: time="2024-08-05T21:52:30.336599491Z" level=info msg="StopPodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\"" Aug 5 21:52:30.336799 containerd[1439]: time="2024-08-05T21:52:30.336777858Z" level=info msg="Ensure that sandbox 957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af in task-service has been cleanup successfully" Aug 5 21:52:30.337913 kubelet[2528]: I0805 21:52:30.337862 2528 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:30.338864 containerd[1439]: time="2024-08-05T21:52:30.338678143Z" level=info msg="StopPodSandbox for \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\"" Aug 5 21:52:30.339844 containerd[1439]: time="2024-08-05T21:52:30.339043840Z" level=info msg="Ensure that sandbox b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe in task-service has been cleanup successfully" Aug 5 21:52:30.377385 containerd[1439]: time="2024-08-05T21:52:30.377312952Z" level=error msg="StopPodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" failed" error="failed to destroy network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.377623 kubelet[2528]: E0805 21:52:30.377588 2528 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:30.377681 kubelet[2528]: E0805 21:52:30.377666 2528 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb"} Aug 5 21:52:30.377722 kubelet[2528]: E0805 21:52:30.377708 2528 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:52:30.377780 kubelet[2528]: E0805 21:52:30.377736 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q8fl2" podUID="dbb8ef3a-6e32-4c6c-91b3-57dd29571e98" Aug 5 21:52:30.382290 containerd[1439]: time="2024-08-05T21:52:30.382176849Z" level=error msg="StopPodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" failed" error="failed to destroy network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.382529 kubelet[2528]: E0805 21:52:30.382447 2528 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 
21:52:30.382609 kubelet[2528]: E0805 21:52:30.382547 2528 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af"} Aug 5 21:52:30.382609 kubelet[2528]: E0805 21:52:30.382598 2528 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d273804c-1785-4ad5-9b9f-33407f6c46a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:52:30.382692 kubelet[2528]: E0805 21:52:30.382632 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d273804c-1785-4ad5-9b9f-33407f6c46a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rsntl" podUID="d273804c-1785-4ad5-9b9f-33407f6c46a0" Aug 5 21:52:30.383779 containerd[1439]: time="2024-08-05T21:52:30.383745999Z" level=error msg="StopPodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" failed" error="failed to destroy network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.384029 kubelet[2528]: E0805 21:52:30.384006 2528 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:30.384095 kubelet[2528]: E0805 21:52:30.384037 2528 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80"} Aug 5 21:52:30.384095 kubelet[2528]: E0805 21:52:30.384068 2528 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"898cb65e-844a-495e-afd2-62c371049ceb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:52:30.384095 kubelet[2528]: E0805 21:52:30.384093 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"898cb65e-844a-495e-afd2-62c371049ceb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mws6f" podUID="898cb65e-844a-495e-afd2-62c371049ceb" Aug 5 21:52:30.384865 containerd[1439]: time="2024-08-05T21:52:30.384821247Z" level=error msg="StopPodSandbox for 
\"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" failed" error="failed to destroy network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:52:30.385007 kubelet[2528]: E0805 21:52:30.384982 2528 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:30.385059 kubelet[2528]: E0805 21:52:30.385036 2528 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe"} Aug 5 21:52:30.385082 kubelet[2528]: E0805 21:52:30.385065 2528 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:52:30.385122 kubelet[2528]: E0805 21:52:30.385088 2528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" podUID="e4d6c640-dd8d-409d-b5ce-dbdbc361cc76" Aug 5 21:52:32.347911 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:42936.service - OpenSSH per-connection server daemon (10.0.0.1:42936). Aug 5 21:52:32.393018 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 42936 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:32.395152 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:32.399811 systemd-logind[1420]: New session 8 of user core. Aug 5 21:52:32.405693 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 21:52:32.560345 sshd[3531]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:32.564712 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 21:52:32.567294 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Aug 5 21:52:32.567811 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:42936.service: Deactivated successfully. Aug 5 21:52:32.570611 systemd-logind[1420]: Removed session 8. Aug 5 21:52:32.812390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517184765.mount: Deactivated successfully. 
Aug 5 21:52:33.064242 containerd[1439]: time="2024-08-05T21:52:33.064069821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:33.071965 containerd[1439]: time="2024-08-05T21:52:33.071918903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 21:52:33.072821 containerd[1439]: time="2024-08-05T21:52:33.072792539Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:33.075394 containerd[1439]: time="2024-08-05T21:52:33.075356444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:33.076635 containerd[1439]: time="2024-08-05T21:52:33.076599215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.737770604s" Aug 5 21:52:33.076689 containerd[1439]: time="2024-08-05T21:52:33.076634416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 21:52:33.083957 containerd[1439]: time="2024-08-05T21:52:33.083895234Z" level=info msg="CreateContainer within sandbox \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 21:52:33.106672 containerd[1439]: time="2024-08-05T21:52:33.106621326Z" level=info msg="CreateContainer 
within sandbox \"46e0f348c56cbc22a33fde4c593c7d7377eacf6532b7efe88579b4fb40e5099e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d10497cbde27537b54dbf318a431fc23be9b0edaf04312ec00547c0b919d86be\"" Aug 5 21:52:33.107333 containerd[1439]: time="2024-08-05T21:52:33.107184310Z" level=info msg="StartContainer for \"d10497cbde27537b54dbf318a431fc23be9b0edaf04312ec00547c0b919d86be\"" Aug 5 21:52:33.159330 systemd[1]: Started cri-containerd-d10497cbde27537b54dbf318a431fc23be9b0edaf04312ec00547c0b919d86be.scope - libcontainer container d10497cbde27537b54dbf318a431fc23be9b0edaf04312ec00547c0b919d86be. Aug 5 21:52:33.201446 containerd[1439]: time="2024-08-05T21:52:33.201389934Z" level=info msg="StartContainer for \"d10497cbde27537b54dbf318a431fc23be9b0edaf04312ec00547c0b919d86be\" returns successfully" Aug 5 21:52:33.347564 kubelet[2528]: E0805 21:52:33.346638 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:33.355375 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 21:52:33.355482 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 21:52:34.348353 kubelet[2528]: E0805 21:52:34.347956 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:34.966247 kubelet[2528]: I0805 21:52:34.966202 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:52:34.966906 kubelet[2528]: E0805 21:52:34.966875 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:34.979933 kubelet[2528]: I0805 21:52:34.979895 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-g8xh6" podStartSLOduration=2.435359422 podStartE2EDuration="18.9798609s" podCreationTimestamp="2024-08-05 21:52:16 +0000 UTC" firstStartedPulling="2024-08-05 21:52:16.532355428 +0000 UTC m=+20.423806039" lastFinishedPulling="2024-08-05 21:52:33.076856946 +0000 UTC m=+36.968307517" observedRunningTime="2024-08-05 21:52:33.361403217 +0000 UTC m=+37.252853828" watchObservedRunningTime="2024-08-05 21:52:34.9798609 +0000 UTC m=+38.871311511" Aug 5 21:52:35.349954 kubelet[2528]: E0805 21:52:35.349833 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:36.003750 systemd-networkd[1366]: vxlan.calico: Link UP Aug 5 21:52:36.003759 systemd-networkd[1366]: vxlan.calico: Gained carrier Aug 5 21:52:37.597428 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:42938.service - OpenSSH per-connection server daemon (10.0.0.1:42938). 
Aug 5 21:52:37.639146 sshd[3895]: Accepted publickey for core from 10.0.0.1 port 42938 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:37.643372 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:37.648557 systemd-logind[1420]: New session 9 of user core. Aug 5 21:52:37.661587 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 21:52:37.719313 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Aug 5 21:52:37.796833 sshd[3895]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:37.800291 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:42938.service: Deactivated successfully. Aug 5 21:52:37.801972 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 21:52:37.802639 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:52:37.803534 systemd-logind[1420]: Removed session 9. Aug 5 21:52:42.208089 containerd[1439]: time="2024-08-05T21:52:42.208033296Z" level=info msg="StopPodSandbox for \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\"" Aug 5 21:52:42.209416 containerd[1439]: time="2024-08-05T21:52:42.209324619Z" level=info msg="StopPodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\"" Aug 5 21:52:42.209416 containerd[1439]: time="2024-08-05T21:52:42.209368300Z" level=info msg="StopPodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\"" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.327 [INFO][3972] k8s.go 608: Cleaning up netns ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.327 [INFO][3972] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" iface="eth0" netns="/var/run/netns/cni-c8071d2a-4a98-dcb9-283f-3139796ab072" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.329 [INFO][3972] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" iface="eth0" netns="/var/run/netns/cni-c8071d2a-4a98-dcb9-283f-3139796ab072" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3972] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" iface="eth0" netns="/var/run/netns/cni-c8071d2a-4a98-dcb9-283f-3139796ab072" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3972] k8s.go 615: Releasing IP address(es) ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3972] utils.go 188: Calico CNI releasing IP address ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.500 [INFO][3993] ipam_plugin.go 411: Releasing address using handleID ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.501 [INFO][3993] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.501 [INFO][3993] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.511 [WARNING][3993] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.511 [INFO][3993] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.512 [INFO][3993] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:42.516324 containerd[1439]: 2024-08-05 21:52:42.513 [INFO][3972] k8s.go 621: Teardown processing complete. ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:42.516741 containerd[1439]: time="2024-08-05T21:52:42.516412345Z" level=info msg="TearDown network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" successfully" Aug 5 21:52:42.516741 containerd[1439]: time="2024-08-05T21:52:42.516439306Z" level=info msg="StopPodSandbox for \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" returns successfully" Aug 5 21:52:42.517725 containerd[1439]: time="2024-08-05T21:52:42.517693948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cbdc89-d4rsn,Uid:e4d6c640-dd8d-409d-b5ce-dbdbc361cc76,Namespace:calico-system,Attempt:1,}" Aug 5 21:52:42.518829 systemd[1]: run-netns-cni\x2dc8071d2a\x2d4a98\x2ddcb9\x2d283f\x2d3139796ab072.mount: Deactivated successfully. 
Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.328 [INFO][3966] k8s.go 608: Cleaning up netns ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3966] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" iface="eth0" netns="/var/run/netns/cni-e1ea0a80-c9be-efdd-a748-ebbf2cfb4c2e" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3966] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" iface="eth0" netns="/var/run/netns/cni-e1ea0a80-c9be-efdd-a748-ebbf2cfb4c2e" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3966] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" iface="eth0" netns="/var/run/netns/cni-e1ea0a80-c9be-efdd-a748-ebbf2cfb4c2e" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3966] k8s.go 615: Releasing IP address(es) ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3966] utils.go 188: Calico CNI releasing IP address ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.500 [INFO][3995] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.501 [INFO][3995] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.512 [INFO][3995] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.522 [WARNING][3995] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.522 [INFO][3995] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.524 [INFO][3995] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:42.527701 containerd[1439]: 2024-08-05 21:52:42.526 [INFO][3966] k8s.go 621: Teardown processing complete. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:42.528063 containerd[1439]: time="2024-08-05T21:52:42.527843606Z" level=info msg="TearDown network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" successfully" Aug 5 21:52:42.528063 containerd[1439]: time="2024-08-05T21:52:42.527871167Z" level=info msg="StopPodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" returns successfully" Aug 5 21:52:42.531092 kubelet[2528]: E0805 21:52:42.528376 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:42.529605 systemd[1]: run-netns-cni\x2de1ea0a80\x2dc9be\x2defdd\x2da748\x2debbf2cfb4c2e.mount: Deactivated successfully. 
Aug 5 21:52:42.531601 containerd[1439]: time="2024-08-05T21:52:42.529106208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8fl2,Uid:dbb8ef3a-6e32-4c6c-91b3-57dd29571e98,Namespace:kube-system,Attempt:1,}" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3981] k8s.go 608: Cleaning up netns ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.330 [INFO][3981] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" iface="eth0" netns="/var/run/netns/cni-320851ea-e908-f320-8f82-b465d8c4e9ee" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3981] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" iface="eth0" netns="/var/run/netns/cni-320851ea-e908-f320-8f82-b465d8c4e9ee" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3981] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" iface="eth0" netns="/var/run/netns/cni-320851ea-e908-f320-8f82-b465d8c4e9ee" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3981] k8s.go 615: Releasing IP address(es) ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.331 [INFO][3981] utils.go 188: Calico CNI releasing IP address ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.500 [INFO][3994] ipam_plugin.go 411: Releasing address using handleID ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.501 [INFO][3994] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.524 [INFO][3994] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.540 [WARNING][3994] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.540 [INFO][3994] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.541 [INFO][3994] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:42.545106 containerd[1439]: 2024-08-05 21:52:42.543 [INFO][3981] k8s.go 621: Teardown processing complete. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:42.547005 systemd[1]: run-netns-cni\x2d320851ea\x2de908\x2df320\x2d8f82\x2db465d8c4e9ee.mount: Deactivated successfully. 
Aug 5 21:52:42.547828 containerd[1439]: time="2024-08-05T21:52:42.547792192Z" level=info msg="TearDown network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" successfully" Aug 5 21:52:42.547882 containerd[1439]: time="2024-08-05T21:52:42.547831313Z" level=info msg="StopPodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" returns successfully" Aug 5 21:52:42.548288 kubelet[2528]: E0805 21:52:42.548265 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:42.548918 containerd[1439]: time="2024-08-05T21:52:42.548879068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mws6f,Uid:898cb65e-844a-495e-afd2-62c371049ceb,Namespace:kube-system,Attempt:1,}" Aug 5 21:52:42.723564 systemd-networkd[1366]: cali4e5c5ae92ec: Link UP Aug 5 21:52:42.724191 systemd-networkd[1366]: cali4e5c5ae92ec: Gained carrier Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.638 [INFO][4037] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--mws6f-eth0 coredns-76f75df574- kube-system 898cb65e-844a-495e-afd2-62c371049ceb 837 0 2024-08-05 21:52:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-mws6f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e5c5ae92ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.638 [INFO][4037] k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.678 [INFO][4062] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" HandleID="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.688 [INFO][4062] ipam_plugin.go 264: Auto assigning IP ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" HandleID="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003004c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-mws6f", "timestamp":"2024-08-05 21:52:42.678577356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.689 [INFO][4062] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.689 [INFO][4062] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.689 [INFO][4062] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.691 [INFO][4062] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.699 [INFO][4062] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.703 [INFO][4062] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.705 [INFO][4062] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.706 [INFO][4062] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.706 [INFO][4062] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.708 [INFO][4062] ipam.go 1685: Creating new handle: k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7 Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.710 [INFO][4062] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.714 [INFO][4062] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" host="localhost" Aug 5 
21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.714 [INFO][4062] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" host="localhost" Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.714 [INFO][4062] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:42.742018 containerd[1439]: 2024-08-05 21:52:42.714 [INFO][4062] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" HandleID="k8s-pod-network.cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.743817 containerd[1439]: 2024-08-05 21:52:42.716 [INFO][4037] k8s.go 386: Populated endpoint ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mws6f-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"898cb65e-844a-495e-afd2-62c371049ceb", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-mws6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5c5ae92ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:42.743817 containerd[1439]: 2024-08-05 21:52:42.716 [INFO][4037] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.743817 containerd[1439]: 2024-08-05 21:52:42.716 [INFO][4037] dataplane_linux.go 68: Setting the host side veth name to cali4e5c5ae92ec ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.743817 containerd[1439]: 2024-08-05 21:52:42.724 [INFO][4037] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.743817 containerd[1439]: 2024-08-05 21:52:42.724 [INFO][4037] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mws6f-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"898cb65e-844a-495e-afd2-62c371049ceb", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7", Pod:"coredns-76f75df574-mws6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5c5ae92ec", MAC:"b2:7e:29:d6:2e:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:42.743817 containerd[1439]: 2024-08-05 21:52:42.737 [INFO][4037] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7" Namespace="kube-system" Pod="coredns-76f75df574-mws6f" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:42.756369 systemd-networkd[1366]: cali771e885a486: Link UP Aug 5 21:52:42.757834 systemd-networkd[1366]: cali771e885a486: Gained carrier Aug 5 21:52:42.774435 containerd[1439]: time="2024-08-05T21:52:42.773812933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:42.774435 containerd[1439]: time="2024-08-05T21:52:42.774188946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:42.774435 containerd[1439]: time="2024-08-05T21:52:42.774207466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:42.774435 containerd[1439]: time="2024-08-05T21:52:42.774218387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.640 [INFO][4027] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0 calico-kube-controllers-85cbdc89- calico-system e4d6c640-dd8d-409d-b5ce-dbdbc361cc76 835 0 2024-08-05 21:52:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85cbdc89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85cbdc89-d4rsn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali771e885a486 [] []}} ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.640 [INFO][4027] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.679 [INFO][4058] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" HandleID="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.694 [INFO][4058] ipam_plugin.go 264: Auto assigning IP 
ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" HandleID="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fa3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85cbdc89-d4rsn", "timestamp":"2024-08-05 21:52:42.679290099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.694 [INFO][4058] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.714 [INFO][4058] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.714 [INFO][4058] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.716 [INFO][4058] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.722 [INFO][4058] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.727 [INFO][4058] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.730 [INFO][4058] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.732 [INFO][4058] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 
21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.732 [INFO][4058] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.735 [INFO][4058] ipam.go 1685: Creating new handle: k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21 Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.742 [INFO][4058] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.748 [INFO][4058] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.748 [INFO][4058] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" host="localhost" Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.748 [INFO][4058] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:52:42.784921 containerd[1439]: 2024-08-05 21:52:42.748 [INFO][4058] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" HandleID="k8s-pod-network.d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.785857 containerd[1439]: 2024-08-05 21:52:42.751 [INFO][4027] k8s.go 386: Populated endpoint ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0", GenerateName:"calico-kube-controllers-85cbdc89-", Namespace:"calico-system", SelfLink:"", UID:"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cbdc89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85cbdc89-d4rsn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali771e885a486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:42.785857 containerd[1439]: 2024-08-05 21:52:42.751 [INFO][4027] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.785857 containerd[1439]: 2024-08-05 21:52:42.751 [INFO][4027] dataplane_linux.go 68: Setting the host side veth name to cali771e885a486 ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.785857 containerd[1439]: 2024-08-05 21:52:42.758 [INFO][4027] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.785857 containerd[1439]: 2024-08-05 21:52:42.767 [INFO][4027] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0", GenerateName:"calico-kube-controllers-85cbdc89-", Namespace:"calico-system", 
SelfLink:"", UID:"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cbdc89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21", Pod:"calico-kube-controllers-85cbdc89-d4rsn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali771e885a486", MAC:"c2:f6:6c:cd:d7:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:42.785857 containerd[1439]: 2024-08-05 21:52:42.779 [INFO][4027] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21" Namespace="calico-system" Pod="calico-kube-controllers-85cbdc89-d4rsn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:42.797376 systemd-networkd[1366]: calidd3bf46c1df: Link UP Aug 5 21:52:42.797815 systemd-networkd[1366]: calidd3bf46c1df: Gained carrier Aug 5 21:52:42.808954 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:55302.service - OpenSSH per-connection server daemon (10.0.0.1:55302). 
Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.645 [INFO][4017] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--q8fl2-eth0 coredns-76f75df574- kube-system dbb8ef3a-6e32-4c6c-91b3-57dd29571e98 836 0 2024-08-05 21:52:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-q8fl2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd3bf46c1df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.647 [INFO][4017] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.681 [INFO][4067] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" HandleID="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.699 [INFO][4067] ipam_plugin.go 264: Auto assigning IP ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" HandleID="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4940), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-q8fl2", "timestamp":"2024-08-05 21:52:42.681562455 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.699 [INFO][4067] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.748 [INFO][4067] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.749 [INFO][4067] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.751 [INFO][4067] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.757 [INFO][4067] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.763 [INFO][4067] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.766 [INFO][4067] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.768 [INFO][4067] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.768 [INFO][4067] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.770 [INFO][4067] 
ipam.go 1685: Creating new handle: k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.780 [INFO][4067] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.786 [INFO][4067] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.786 [INFO][4067] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" host="localhost" Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.786 [INFO][4067] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:52:42.817833 containerd[1439]: 2024-08-05 21:52:42.786 [INFO][4067] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" HandleID="k8s-pod-network.c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.818687 containerd[1439]: 2024-08-05 21:52:42.789 [INFO][4017] k8s.go 386: Populated endpoint ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q8fl2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-q8fl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3bf46c1df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:42.818687 containerd[1439]: 2024-08-05 21:52:42.790 [INFO][4017] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.818687 containerd[1439]: 2024-08-05 21:52:42.790 [INFO][4017] dataplane_linux.go 68: Setting the host side veth name to calidd3bf46c1df ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.818687 containerd[1439]: 2024-08-05 21:52:42.797 [INFO][4017] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.818687 containerd[1439]: 2024-08-05 21:52:42.801 [INFO][4017] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q8fl2-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f", Pod:"coredns-76f75df574-q8fl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3bf46c1df", MAC:"62:f8:d6:a7:e6:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:42.818687 containerd[1439]: 2024-08-05 21:52:42.813 [INFO][4017] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f" Namespace="kube-system" Pod="coredns-76f75df574-q8fl2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:42.832009 containerd[1439]: 
time="2024-08-05T21:52:42.831535099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:42.832009 containerd[1439]: time="2024-08-05T21:52:42.831602541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:42.832009 containerd[1439]: time="2024-08-05T21:52:42.831620102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:42.832009 containerd[1439]: time="2024-08-05T21:52:42.831631382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:42.853368 systemd[1]: Started cri-containerd-cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7.scope - libcontainer container cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7. Aug 5 21:52:42.868539 containerd[1439]: time="2024-08-05T21:52:42.868313846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:42.869211 containerd[1439]: time="2024-08-05T21:52:42.868625056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:42.870285 containerd[1439]: time="2024-08-05T21:52:42.869275718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:42.870285 containerd[1439]: time="2024-08-05T21:52:42.869367841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:42.875385 systemd[1]: Started cri-containerd-d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21.scope - libcontainer container d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21. Aug 5 21:52:42.885444 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:52:42.894066 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 55302 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:42.895355 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:42.903172 systemd[1]: Started cri-containerd-c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f.scope - libcontainer container c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f. Aug 5 21:52:42.916635 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:52:42.921771 containerd[1439]: time="2024-08-05T21:52:42.920840839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mws6f,Uid:898cb65e-844a-495e-afd2-62c371049ceb,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7\"" Aug 5 21:52:42.920958 systemd-logind[1420]: New session 10 of user core. Aug 5 21:52:42.924829 systemd[1]: Started session-10.scope - Session 10 of User core. 
Aug 5 21:52:42.926058 kubelet[2528]: E0805 21:52:42.921910 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:42.931510 containerd[1439]: time="2024-08-05T21:52:42.931245906Z" level=info msg="CreateContainer within sandbox \"cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:52:42.943335 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:52:42.953635 containerd[1439]: time="2024-08-05T21:52:42.953584531Z" level=info msg="CreateContainer within sandbox \"cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cd2cb6d6dadc173fa7530bd22908afe42c85ec5f1df4ce6ef0b9fbd6ef2c5bd\"" Aug 5 21:52:42.954899 containerd[1439]: time="2024-08-05T21:52:42.954250033Z" level=info msg="StartContainer for \"7cd2cb6d6dadc173fa7530bd22908afe42c85ec5f1df4ce6ef0b9fbd6ef2c5bd\"" Aug 5 21:52:42.962502 containerd[1439]: time="2024-08-05T21:52:42.962462107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cbdc89-d4rsn,Uid:e4d6c640-dd8d-409d-b5ce-dbdbc361cc76,Namespace:calico-system,Attempt:1,} returns sandbox id \"d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21\"" Aug 5 21:52:42.965452 containerd[1439]: time="2024-08-05T21:52:42.965425526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 21:52:42.978815 containerd[1439]: time="2024-08-05T21:52:42.978714250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q8fl2,Uid:dbb8ef3a-6e32-4c6c-91b3-57dd29571e98,Namespace:kube-system,Attempt:1,} returns sandbox id \"c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f\"" Aug 5 21:52:42.979484 
kubelet[2528]: E0805 21:52:42.979462 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:42.982580 containerd[1439]: time="2024-08-05T21:52:42.982328250Z" level=info msg="CreateContainer within sandbox \"c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:52:42.998082 containerd[1439]: time="2024-08-05T21:52:42.998000573Z" level=info msg="CreateContainer within sandbox \"c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ac4f4fb55d336575b60067b238567595da475683a30803d861f8f22a9d74a9b\"" Aug 5 21:52:42.998509 containerd[1439]: time="2024-08-05T21:52:42.998482789Z" level=info msg="StartContainer for \"1ac4f4fb55d336575b60067b238567595da475683a30803d861f8f22a9d74a9b\"" Aug 5 21:52:42.999340 systemd[1]: Started cri-containerd-7cd2cb6d6dadc173fa7530bd22908afe42c85ec5f1df4ce6ef0b9fbd6ef2c5bd.scope - libcontainer container 7cd2cb6d6dadc173fa7530bd22908afe42c85ec5f1df4ce6ef0b9fbd6ef2c5bd. Aug 5 21:52:43.036525 systemd[1]: Started cri-containerd-1ac4f4fb55d336575b60067b238567595da475683a30803d861f8f22a9d74a9b.scope - libcontainer container 1ac4f4fb55d336575b60067b238567595da475683a30803d861f8f22a9d74a9b. 
Aug 5 21:52:43.046309 containerd[1439]: time="2024-08-05T21:52:43.046195793Z" level=info msg="StartContainer for \"7cd2cb6d6dadc173fa7530bd22908afe42c85ec5f1df4ce6ef0b9fbd6ef2c5bd\" returns successfully" Aug 5 21:52:43.081396 containerd[1439]: time="2024-08-05T21:52:43.081350945Z" level=info msg="StartContainer for \"1ac4f4fb55d336575b60067b238567595da475683a30803d861f8f22a9d74a9b\" returns successfully" Aug 5 21:52:43.088106 sshd[4137]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:43.097289 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:55302.service: Deactivated successfully. Aug 5 21:52:43.101150 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:52:43.102017 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:52:43.112519 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:55312.service - OpenSSH per-connection server daemon (10.0.0.1:55312). Aug 5 21:52:43.113071 systemd-logind[1420]: Removed session 10. Aug 5 21:52:43.153684 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 55312 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:43.154397 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:43.179171 systemd-logind[1420]: New session 11 of user core. Aug 5 21:52:43.195415 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:52:43.364517 sshd[4340]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:43.374468 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:55312.service: Deactivated successfully. Aug 5 21:52:43.380331 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:52:43.381735 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. 
Aug 5 21:52:43.387222 kubelet[2528]: E0805 21:52:43.387055 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:43.388496 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:55320.service - OpenSSH per-connection server daemon (10.0.0.1:55320). Aug 5 21:52:43.392092 kubelet[2528]: E0805 21:52:43.392064 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:43.393811 systemd-logind[1420]: Removed session 11. Aug 5 21:52:43.398342 kubelet[2528]: I0805 21:52:43.398012 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q8fl2" podStartSLOduration=33.397969836 podStartE2EDuration="33.397969836s" podCreationTimestamp="2024-08-05 21:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:52:43.397373497 +0000 UTC m=+47.288824108" watchObservedRunningTime="2024-08-05 21:52:43.397969836 +0000 UTC m=+47.289420447" Aug 5 21:52:43.411170 kubelet[2528]: I0805 21:52:43.410424 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mws6f" podStartSLOduration=33.410385483 podStartE2EDuration="33.410385483s" podCreationTimestamp="2024-08-05 21:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:52:43.409232805 +0000 UTC m=+47.300683416" watchObservedRunningTime="2024-08-05 21:52:43.410385483 +0000 UTC m=+47.301836094" Aug 5 21:52:43.436718 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 55320 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:43.438255 sshd[4363]: 
pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:43.444743 systemd-logind[1420]: New session 12 of user core. Aug 5 21:52:43.461395 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:52:43.577385 sshd[4363]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:43.580096 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:55320.service: Deactivated successfully. Aug 5 21:52:43.582297 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:52:43.583941 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:52:43.584992 systemd-logind[1420]: Removed session 12. Aug 5 21:52:43.991374 systemd-networkd[1366]: cali771e885a486: Gained IPv6LL Aug 5 21:52:43.991717 systemd-networkd[1366]: cali4e5c5ae92ec: Gained IPv6LL Aug 5 21:52:44.208451 containerd[1439]: time="2024-08-05T21:52:44.208110337Z" level=info msg="StopPodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\"" Aug 5 21:52:44.248264 systemd-networkd[1366]: calidd3bf46c1df: Gained IPv6LL Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.274 [INFO][4398] k8s.go 608: Cleaning up netns ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.274 [INFO][4398] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" iface="eth0" netns="/var/run/netns/cni-c6887e05-e476-0b11-b110-ae4052116b32" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.274 [INFO][4398] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" iface="eth0" netns="/var/run/netns/cni-c6887e05-e476-0b11-b110-ae4052116b32" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.275 [INFO][4398] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" iface="eth0" netns="/var/run/netns/cni-c6887e05-e476-0b11-b110-ae4052116b32" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.275 [INFO][4398] k8s.go 615: Releasing IP address(es) ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.275 [INFO][4398] utils.go 188: Calico CNI releasing IP address ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.298 [INFO][4409] ipam_plugin.go 411: Releasing address using handleID ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.298 [INFO][4409] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.298 [INFO][4409] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.306 [WARNING][4409] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.307 [INFO][4409] ipam_plugin.go 439: Releasing address using workloadID ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.308 [INFO][4409] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:44.311333 containerd[1439]: 2024-08-05 21:52:44.309 [INFO][4398] k8s.go 621: Teardown processing complete. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:44.311794 containerd[1439]: time="2024-08-05T21:52:44.311482064Z" level=info msg="TearDown network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" successfully" Aug 5 21:52:44.311794 containerd[1439]: time="2024-08-05T21:52:44.311509145Z" level=info msg="StopPodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" returns successfully" Aug 5 21:52:44.313808 systemd[1]: run-netns-cni\x2dc6887e05\x2de476\x2d0b11\x2db110\x2dae4052116b32.mount: Deactivated successfully. 
Aug 5 21:52:44.314654 containerd[1439]: time="2024-08-05T21:52:44.314579644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsntl,Uid:d273804c-1785-4ad5-9b9f-33407f6c46a0,Namespace:calico-system,Attempt:1,}" Aug 5 21:52:44.394320 kubelet[2528]: E0805 21:52:44.393976 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:44.394320 kubelet[2528]: E0805 21:52:44.394054 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:44.433477 systemd-networkd[1366]: cali046639d5163: Link UP Aug 5 21:52:44.434387 systemd-networkd[1366]: cali046639d5163: Gained carrier Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.366 [INFO][4416] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rsntl-eth0 csi-node-driver- calico-system d273804c-1785-4ad5-9b9f-33407f6c46a0 911 0 2024-08-05 21:52:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-rsntl eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali046639d5163 [] []}} ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.366 [INFO][4416] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" 
Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.394 [INFO][4430] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" HandleID="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.406 [INFO][4430] ipam_plugin.go 264: Auto assigning IP ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" HandleID="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002821c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rsntl", "timestamp":"2024-08-05 21:52:44.39401328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.407 [INFO][4430] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.407 [INFO][4430] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.407 [INFO][4430] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.408 [INFO][4430] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.411 [INFO][4430] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.416 [INFO][4430] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.417 [INFO][4430] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.419 [INFO][4430] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.419 [INFO][4430] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.421 [INFO][4430] ipam.go 1685: Creating new handle: k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025 Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.424 [INFO][4430] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.428 [INFO][4430] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" host="localhost" Aug 5 
21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.429 [INFO][4430] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" host="localhost" Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.429 [INFO][4430] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:44.447070 containerd[1439]: 2024-08-05 21:52:44.429 [INFO][4430] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" HandleID="k8s-pod-network.ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.447615 containerd[1439]: 2024-08-05 21:52:44.431 [INFO][4416] k8s.go 386: Populated endpoint ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rsntl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d273804c-1785-4ad5-9b9f-33407f6c46a0", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rsntl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali046639d5163", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:44.447615 containerd[1439]: 2024-08-05 21:52:44.431 [INFO][4416] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.447615 containerd[1439]: 2024-08-05 21:52:44.431 [INFO][4416] dataplane_linux.go 68: Setting the host side veth name to cali046639d5163 ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.447615 containerd[1439]: 2024-08-05 21:52:44.432 [INFO][4416] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.447615 containerd[1439]: 2024-08-05 21:52:44.433 [INFO][4416] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rsntl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d273804c-1785-4ad5-9b9f-33407f6c46a0", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025", Pod:"csi-node-driver-rsntl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali046639d5163", MAC:"82:8b:0d:34:40:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:44.447615 containerd[1439]: 2024-08-05 21:52:44.444 [INFO][4416] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025" Namespace="calico-system" Pod="csi-node-driver-rsntl" WorkloadEndpoint="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:44.473218 containerd[1439]: time="2024-08-05T21:52:44.472980382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:52:44.473218 containerd[1439]: time="2024-08-05T21:52:44.473036904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:44.473218 containerd[1439]: time="2024-08-05T21:52:44.473069425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:52:44.473562 containerd[1439]: time="2024-08-05T21:52:44.473247911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:52:44.499323 systemd[1]: Started cri-containerd-ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025.scope - libcontainer container ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025. Aug 5 21:52:44.511482 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:52:44.527446 containerd[1439]: time="2024-08-05T21:52:44.527404214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsntl,Uid:d273804c-1785-4ad5-9b9f-33407f6c46a0,Namespace:calico-system,Attempt:1,} returns sandbox id \"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025\"" Aug 5 21:52:44.620227 containerd[1439]: time="2024-08-05T21:52:44.620178160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:44.620897 containerd[1439]: time="2024-08-05T21:52:44.620727338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 21:52:44.621562 containerd[1439]: time="2024-08-05T21:52:44.621531484Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:44.624276 containerd[1439]: time="2024-08-05T21:52:44.624235131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:44.624924 containerd[1439]: time="2024-08-05T21:52:44.624808549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.659252499s" Aug 5 21:52:44.624924 containerd[1439]: time="2024-08-05T21:52:44.624846271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 21:52:44.625517 containerd[1439]: time="2024-08-05T21:52:44.625491691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 21:52:44.632203 containerd[1439]: time="2024-08-05T21:52:44.632043622Z" level=info msg="CreateContainer within sandbox \"d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 21:52:44.645814 containerd[1439]: time="2024-08-05T21:52:44.645763184Z" level=info msg="CreateContainer within sandbox \"d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8bfa92471616e6afb9dbb789a4a87cfeccfef4e71a8f03aef05ca8d32136fc1c\"" Aug 5 21:52:44.646392 containerd[1439]: time="2024-08-05T21:52:44.646363163Z" level=info msg="StartContainer for 
\"8bfa92471616e6afb9dbb789a4a87cfeccfef4e71a8f03aef05ca8d32136fc1c\"" Aug 5 21:52:44.673335 systemd[1]: Started cri-containerd-8bfa92471616e6afb9dbb789a4a87cfeccfef4e71a8f03aef05ca8d32136fc1c.scope - libcontainer container 8bfa92471616e6afb9dbb789a4a87cfeccfef4e71a8f03aef05ca8d32136fc1c. Aug 5 21:52:44.701740 containerd[1439]: time="2024-08-05T21:52:44.700946400Z" level=info msg="StartContainer for \"8bfa92471616e6afb9dbb789a4a87cfeccfef4e71a8f03aef05ca8d32136fc1c\" returns successfully" Aug 5 21:52:45.398159 kubelet[2528]: E0805 21:52:45.398112 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:45.399023 kubelet[2528]: E0805 21:52:45.398951 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:45.452284 kubelet[2528]: I0805 21:52:45.451780 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85cbdc89-d4rsn" podStartSLOduration=27.790829812 podStartE2EDuration="29.451738526s" podCreationTimestamp="2024-08-05 21:52:16 +0000 UTC" firstStartedPulling="2024-08-05 21:52:42.964316169 +0000 UTC m=+46.855766780" lastFinishedPulling="2024-08-05 21:52:44.625224923 +0000 UTC m=+48.516675494" observedRunningTime="2024-08-05 21:52:45.412078031 +0000 UTC m=+49.303528642" watchObservedRunningTime="2024-08-05 21:52:45.451738526 +0000 UTC m=+49.343189137" Aug 5 21:52:45.655461 systemd-networkd[1366]: cali046639d5163: Gained IPv6LL Aug 5 21:52:45.670729 containerd[1439]: time="2024-08-05T21:52:45.670684057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:45.671556 containerd[1439]: time="2024-08-05T21:52:45.671398199Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 21:52:45.672370 containerd[1439]: time="2024-08-05T21:52:45.672100062Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:45.674443 containerd[1439]: time="2024-08-05T21:52:45.674386174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:45.674992 containerd[1439]: time="2024-08-05T21:52:45.674954712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.04942882s" Aug 5 21:52:45.675047 containerd[1439]: time="2024-08-05T21:52:45.674991553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 21:52:45.676588 containerd[1439]: time="2024-08-05T21:52:45.676554363Z" level=info msg="CreateContainer within sandbox \"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 21:52:45.701487 containerd[1439]: time="2024-08-05T21:52:45.701366668Z" level=info msg="CreateContainer within sandbox \"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b0fca6a8fa5ea50fc9435ff79979b121f1e714af2ada846adeb9ad70057590b\"" Aug 5 21:52:45.703222 containerd[1439]: time="2024-08-05T21:52:45.702022449Z" level=info msg="StartContainer for 
\"9b0fca6a8fa5ea50fc9435ff79979b121f1e714af2ada846adeb9ad70057590b\"" Aug 5 21:52:45.732363 systemd[1]: Started cri-containerd-9b0fca6a8fa5ea50fc9435ff79979b121f1e714af2ada846adeb9ad70057590b.scope - libcontainer container 9b0fca6a8fa5ea50fc9435ff79979b121f1e714af2ada846adeb9ad70057590b. Aug 5 21:52:45.761832 containerd[1439]: time="2024-08-05T21:52:45.761792701Z" level=info msg="StartContainer for \"9b0fca6a8fa5ea50fc9435ff79979b121f1e714af2ada846adeb9ad70057590b\" returns successfully" Aug 5 21:52:45.764037 containerd[1439]: time="2024-08-05T21:52:45.763998090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 21:52:46.987811 containerd[1439]: time="2024-08-05T21:52:46.987748014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:46.989889 containerd[1439]: time="2024-08-05T21:52:46.989253541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 21:52:46.990388 containerd[1439]: time="2024-08-05T21:52:46.990360735Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:46.997377 containerd[1439]: time="2024-08-05T21:52:46.997338352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:52:46.998544 containerd[1439]: time="2024-08-05T21:52:46.998156618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.234120287s" Aug 5 21:52:46.998544 containerd[1439]: time="2024-08-05T21:52:46.998191419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 21:52:47.002354 containerd[1439]: time="2024-08-05T21:52:47.000618055Z" level=info msg="CreateContainer within sandbox \"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 21:52:47.012242 containerd[1439]: time="2024-08-05T21:52:47.012192210Z" level=info msg="CreateContainer within sandbox \"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b94df7aab54cd90f77aeca808a3b3b5a5d7c86cee393481565f14ae0ddb2316d\"" Aug 5 21:52:47.012647 containerd[1439]: time="2024-08-05T21:52:47.012611303Z" level=info msg="StartContainer for \"b94df7aab54cd90f77aeca808a3b3b5a5d7c86cee393481565f14ae0ddb2316d\"" Aug 5 21:52:47.044327 systemd[1]: Started cri-containerd-b94df7aab54cd90f77aeca808a3b3b5a5d7c86cee393481565f14ae0ddb2316d.scope - libcontainer container b94df7aab54cd90f77aeca808a3b3b5a5d7c86cee393481565f14ae0ddb2316d. 
Aug 5 21:52:47.073030 containerd[1439]: time="2024-08-05T21:52:47.072988716Z" level=info msg="StartContainer for \"b94df7aab54cd90f77aeca808a3b3b5a5d7c86cee393481565f14ae0ddb2316d\" returns successfully" Aug 5 21:52:47.284776 kubelet[2528]: I0805 21:52:47.284678 2528 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 21:52:47.284776 kubelet[2528]: I0805 21:52:47.284719 2528 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 21:52:47.416866 kubelet[2528]: I0805 21:52:47.416829 2528 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-rsntl" podStartSLOduration=28.949128922 podStartE2EDuration="31.416788745s" podCreationTimestamp="2024-08-05 21:52:16 +0000 UTC" firstStartedPulling="2024-08-05 21:52:44.531114774 +0000 UTC m=+48.422565345" lastFinishedPulling="2024-08-05 21:52:46.998774557 +0000 UTC m=+50.890225168" observedRunningTime="2024-08-05 21:52:47.415812035 +0000 UTC m=+51.307262646" watchObservedRunningTime="2024-08-05 21:52:47.416788745 +0000 UTC m=+51.308239396" Aug 5 21:52:48.593789 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:55330.service - OpenSSH per-connection server daemon (10.0.0.1:55330). Aug 5 21:52:48.659896 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 55330 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:48.661613 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:48.667217 systemd-logind[1420]: New session 13 of user core. Aug 5 21:52:48.676324 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 5 21:52:48.816205 sshd[4651]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:48.832950 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:55330.service: Deactivated successfully. Aug 5 21:52:48.834948 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:52:48.836915 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:52:48.846378 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:55338.service - OpenSSH per-connection server daemon (10.0.0.1:55338). Aug 5 21:52:48.849131 systemd-logind[1420]: Removed session 13. Aug 5 21:52:48.890168 sshd[4665]: Accepted publickey for core from 10.0.0.1 port 55338 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:48.891541 sshd[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:48.895679 systemd-logind[1420]: New session 14 of user core. Aug 5 21:52:48.903648 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:52:49.143849 sshd[4665]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:49.153315 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:55338.service: Deactivated successfully. Aug 5 21:52:49.155414 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:52:49.156130 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:52:49.163448 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:55340.service - OpenSSH per-connection server daemon (10.0.0.1:55340). Aug 5 21:52:49.165074 systemd-logind[1420]: Removed session 14. Aug 5 21:52:49.213962 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 55340 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:49.215724 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:49.220279 systemd-logind[1420]: New session 15 of user core. Aug 5 21:52:49.226311 systemd[1]: Started session-15.scope - Session 15 of User core. 
Aug 5 21:52:50.656991 sshd[4678]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:50.663870 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:55340.service: Deactivated successfully. Aug 5 21:52:50.667783 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:52:50.670341 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:52:50.679375 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:55348.service - OpenSSH per-connection server daemon (10.0.0.1:55348). Aug 5 21:52:50.681233 systemd-logind[1420]: Removed session 15. Aug 5 21:52:50.717621 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 55348 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:50.718830 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:50.722845 systemd-logind[1420]: New session 16 of user core. Aug 5 21:52:50.732351 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:52:50.956488 sshd[4705]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:50.966681 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:55348.service: Deactivated successfully. Aug 5 21:52:50.968627 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:52:50.971303 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:52:50.976410 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:55350.service - OpenSSH per-connection server daemon (10.0.0.1:55350). Aug 5 21:52:50.977217 systemd-logind[1420]: Removed session 16. Aug 5 21:52:51.011723 sshd[4717]: Accepted publickey for core from 10.0.0.1 port 55350 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:51.012993 sshd[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:51.016969 systemd-logind[1420]: New session 17 of user core. Aug 5 21:52:51.025368 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 5 21:52:51.148452 sshd[4717]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:51.151548 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:52:51.151822 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:55350.service: Deactivated successfully. Aug 5 21:52:51.153568 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:52:51.155429 systemd-logind[1420]: Removed session 17. Aug 5 21:52:53.184977 kubelet[2528]: E0805 21:52:53.184872 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:52:56.163469 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:41970.service - OpenSSH per-connection server daemon (10.0.0.1:41970). Aug 5 21:52:56.197506 containerd[1439]: time="2024-08-05T21:52:56.197456721Z" level=info msg="StopPodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\"" Aug 5 21:52:56.204860 sshd[4761]: Accepted publickey for core from 10.0.0.1 port 41970 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:52:56.206579 sshd[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:52:56.214628 systemd-logind[1420]: New session 18 of user core. Aug 5 21:52:56.224378 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.277 [WARNING][4778] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mws6f-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"898cb65e-844a-495e-afd2-62c371049ceb", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7", Pod:"coredns-76f75df574-mws6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5c5ae92ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.278 [INFO][4778] k8s.go 608: Cleaning up netns 
ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.278 [INFO][4778] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" iface="eth0" netns="" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.278 [INFO][4778] k8s.go 615: Releasing IP address(es) ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.278 [INFO][4778] utils.go 188: Calico CNI releasing IP address ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.307 [INFO][4792] ipam_plugin.go 411: Releasing address using handleID ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.307 [INFO][4792] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.307 [INFO][4792] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.316 [WARNING][4792] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.316 [INFO][4792] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.321 [INFO][4792] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.324886 containerd[1439]: 2024-08-05 21:52:56.322 [INFO][4778] k8s.go 621: Teardown processing complete. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.325373 containerd[1439]: time="2024-08-05T21:52:56.324935358Z" level=info msg="TearDown network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" successfully" Aug 5 21:52:56.325373 containerd[1439]: time="2024-08-05T21:52:56.324962718Z" level=info msg="StopPodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" returns successfully" Aug 5 21:52:56.327054 containerd[1439]: time="2024-08-05T21:52:56.327019495Z" level=info msg="RemovePodSandbox for \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\"" Aug 5 21:52:56.337052 containerd[1439]: time="2024-08-05T21:52:56.327067376Z" level=info msg="Forcibly stopping sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\"" Aug 5 21:52:56.365947 sshd[4761]: pam_unix(sshd:session): session closed for user core Aug 5 21:52:56.370813 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:52:56.372464 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. 
Aug 5 21:52:56.372989 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:41970.service: Deactivated successfully. Aug 5 21:52:56.376081 systemd-logind[1420]: Removed session 18. Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.376 [WARNING][4820] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mws6f-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"898cb65e-844a-495e-afd2-62c371049ceb", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc7b592790e5449511926a9e2ec4f4706c4d9a1be160483f1e6a31a7bdb9f6d7", Pod:"coredns-76f75df574-mws6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5c5ae92ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.377 [INFO][4820] k8s.go 608: Cleaning up netns ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.377 [INFO][4820] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" iface="eth0" netns="" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.377 [INFO][4820] k8s.go 615: Releasing IP address(es) ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.377 [INFO][4820] utils.go 188: Calico CNI releasing IP address ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.403 [INFO][4829] ipam_plugin.go 411: Releasing address using handleID ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.404 [INFO][4829] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.404 [INFO][4829] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.413 [WARNING][4829] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.413 [INFO][4829] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" HandleID="k8s-pod-network.a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Workload="localhost-k8s-coredns--76f75df574--mws6f-eth0" Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.415 [INFO][4829] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.418756 containerd[1439]: 2024-08-05 21:52:56.417 [INFO][4820] k8s.go 621: Teardown processing complete. ContainerID="a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80" Aug 5 21:52:56.418756 containerd[1439]: time="2024-08-05T21:52:56.418677383Z" level=info msg="TearDown network for sandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" successfully" Aug 5 21:52:56.426700 containerd[1439]: time="2024-08-05T21:52:56.426650643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:52:56.426825 containerd[1439]: time="2024-08-05T21:52:56.426729405Z" level=info msg="RemovePodSandbox \"a1dd4bafe4261ec061bb941844e6493ab8361843e6fae0a50afa2f293fe31a80\" returns successfully" Aug 5 21:52:56.427118 containerd[1439]: time="2024-08-05T21:52:56.427092055Z" level=info msg="StopPodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\"" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.461 [WARNING][4852] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rsntl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d273804c-1785-4ad5-9b9f-33407f6c46a0", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025", Pod:"csi-node-driver-rsntl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali046639d5163", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.462 [INFO][4852] k8s.go 608: Cleaning up netns ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.462 [INFO][4852] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" iface="eth0" netns="" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.462 [INFO][4852] k8s.go 615: Releasing IP address(es) ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.462 [INFO][4852] utils.go 188: Calico CNI releasing IP address ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.480 [INFO][4860] ipam_plugin.go 411: Releasing address using handleID ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.480 [INFO][4860] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.480 [INFO][4860] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.488 [WARNING][4860] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.488 [INFO][4860] ipam_plugin.go 439: Releasing address using workloadID ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.489 [INFO][4860] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.492542 containerd[1439]: 2024-08-05 21:52:56.491 [INFO][4852] k8s.go 621: Teardown processing complete. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.493196 containerd[1439]: time="2024-08-05T21:52:56.492568462Z" level=info msg="TearDown network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" successfully" Aug 5 21:52:56.493196 containerd[1439]: time="2024-08-05T21:52:56.492592382Z" level=info msg="StopPodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" returns successfully" Aug 5 21:52:56.493196 containerd[1439]: time="2024-08-05T21:52:56.493011994Z" level=info msg="RemovePodSandbox for \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\"" Aug 5 21:52:56.493196 containerd[1439]: time="2024-08-05T21:52:56.493039875Z" level=info msg="Forcibly stopping sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\"" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.538 [WARNING][4883] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rsntl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d273804c-1785-4ad5-9b9f-33407f6c46a0", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddd69a1c583fa6cb1cab7e9b565d2563bd331e1fe9b49408687016b9ac3af025", Pod:"csi-node-driver-rsntl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali046639d5163", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.538 [INFO][4883] k8s.go 608: Cleaning up netns ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.538 [INFO][4883] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" iface="eth0" netns="" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.538 [INFO][4883] k8s.go 615: Releasing IP address(es) ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.538 [INFO][4883] utils.go 188: Calico CNI releasing IP address ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.556 [INFO][4890] ipam_plugin.go 411: Releasing address using handleID ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.556 [INFO][4890] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.556 [INFO][4890] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.564 [WARNING][4890] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.564 [INFO][4890] ipam_plugin.go 439: Releasing address using workloadID ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" HandleID="k8s-pod-network.957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Workload="localhost-k8s-csi--node--driver--rsntl-eth0" Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.566 [INFO][4890] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:52:56.569023 containerd[1439]: 2024-08-05 21:52:56.567 [INFO][4883] k8s.go 621: Teardown processing complete. ContainerID="957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af" Aug 5 21:52:56.569425 containerd[1439]: time="2024-08-05T21:52:56.569062612Z" level=info msg="TearDown network for sandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" successfully" Aug 5 21:52:56.571923 containerd[1439]: time="2024-08-05T21:52:56.571836008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:52:56.571988 containerd[1439]: time="2024-08-05T21:52:56.571940611Z" level=info msg="RemovePodSandbox \"957b00fcb06b24b119cc36320a30c80b48c23a666030821d4698c3c44bd442af\" returns successfully" Aug 5 21:52:56.572515 containerd[1439]: time="2024-08-05T21:52:56.572492906Z" level=info msg="StopPodSandbox for \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\"" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.606 [WARNING][4912] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0", GenerateName:"calico-kube-controllers-85cbdc89-", Namespace:"calico-system", SelfLink:"", UID:"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cbdc89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21", Pod:"calico-kube-controllers-85cbdc89-d4rsn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali771e885a486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.606 [INFO][4912] k8s.go 608: Cleaning up netns ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.606 [INFO][4912] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" iface="eth0" netns="" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.606 [INFO][4912] k8s.go 615: Releasing IP address(es) ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.606 [INFO][4912] utils.go 188: Calico CNI releasing IP address ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.623 [INFO][4920] ipam_plugin.go 411: Releasing address using handleID ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.623 [INFO][4920] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.623 [INFO][4920] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.632 [WARNING][4920] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.632 [INFO][4920] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.633 [INFO][4920] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.636245 containerd[1439]: 2024-08-05 21:52:56.634 [INFO][4912] k8s.go 621: Teardown processing complete. ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.636245 containerd[1439]: time="2024-08-05T21:52:56.636065860Z" level=info msg="TearDown network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" successfully" Aug 5 21:52:56.636245 containerd[1439]: time="2024-08-05T21:52:56.636090501Z" level=info msg="StopPodSandbox for \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" returns successfully" Aug 5 21:52:56.636686 containerd[1439]: time="2024-08-05T21:52:56.636629555Z" level=info msg="RemovePodSandbox for \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\"" Aug 5 21:52:56.636748 containerd[1439]: time="2024-08-05T21:52:56.636661836Z" level=info msg="Forcibly stopping sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\"" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.674 [WARNING][4943] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0", GenerateName:"calico-kube-controllers-85cbdc89-", Namespace:"calico-system", SelfLink:"", UID:"e4d6c640-dd8d-409d-b5ce-dbdbc361cc76", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cbdc89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1684becf515b4664b1fec8201289e514c43bb9b636497143d80f85b68638f21", Pod:"calico-kube-controllers-85cbdc89-d4rsn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali771e885a486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.674 [INFO][4943] k8s.go 608: Cleaning up netns ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.674 [INFO][4943] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" iface="eth0" netns="" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.674 [INFO][4943] k8s.go 615: Releasing IP address(es) ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.674 [INFO][4943] utils.go 188: Calico CNI releasing IP address ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.691 [INFO][4950] ipam_plugin.go 411: Releasing address using handleID ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.691 [INFO][4950] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.692 [INFO][4950] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.703 [WARNING][4950] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.704 [INFO][4950] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" HandleID="k8s-pod-network.b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Workload="localhost-k8s-calico--kube--controllers--85cbdc89--d4rsn-eth0" Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.706 [INFO][4950] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.709277 containerd[1439]: 2024-08-05 21:52:56.707 [INFO][4943] k8s.go 621: Teardown processing complete. ContainerID="b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe" Aug 5 21:52:56.709667 containerd[1439]: time="2024-08-05T21:52:56.709319881Z" level=info msg="TearDown network for sandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" successfully" Aug 5 21:52:56.712095 containerd[1439]: time="2024-08-05T21:52:56.712056516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:52:56.712146 containerd[1439]: time="2024-08-05T21:52:56.712118478Z" level=info msg="RemovePodSandbox \"b89e20bf661d484f4daa291792eca0523729599c8a8962ea65467a07b6ba66fe\" returns successfully" Aug 5 21:52:56.712633 containerd[1439]: time="2024-08-05T21:52:56.712613251Z" level=info msg="StopPodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\"" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.745 [WARNING][4973] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q8fl2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f", Pod:"coredns-76f75df574-q8fl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3bf46c1df", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.745 [INFO][4973] k8s.go 608: Cleaning up netns ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.745 [INFO][4973] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" iface="eth0" netns="" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.745 [INFO][4973] k8s.go 615: Releasing IP address(es) ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.745 [INFO][4973] utils.go 188: Calico CNI releasing IP address ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.765 [INFO][4981] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.765 [INFO][4981] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.765 [INFO][4981] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.773 [WARNING][4981] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.773 [INFO][4981] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.774 [INFO][4981] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.777235 containerd[1439]: 2024-08-05 21:52:56.775 [INFO][4973] k8s.go 621: Teardown processing complete. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.777918 containerd[1439]: time="2024-08-05T21:52:56.777237954Z" level=info msg="TearDown network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" successfully" Aug 5 21:52:56.777918 containerd[1439]: time="2024-08-05T21:52:56.777264035Z" level=info msg="StopPodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" returns successfully" Aug 5 21:52:56.777918 containerd[1439]: time="2024-08-05T21:52:56.777702567Z" level=info msg="RemovePodSandbox for \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\"" Aug 5 21:52:56.777918 containerd[1439]: time="2024-08-05T21:52:56.777741688Z" level=info msg="Forcibly stopping sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\"" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.811 [WARNING][5003] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q8fl2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"dbb8ef3a-6e32-4c6c-91b3-57dd29571e98", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 52, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c26ef696a6e018636afaf5da690fdc4ec9c4b81f1e8014ae8bd2d0ca39abfe2f", Pod:"coredns-76f75df574-q8fl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3bf46c1df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.811 [INFO][5003] k8s.go 608: 
Cleaning up netns ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.811 [INFO][5003] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" iface="eth0" netns="" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.811 [INFO][5003] k8s.go 615: Releasing IP address(es) ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.811 [INFO][5003] utils.go 188: Calico CNI releasing IP address ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.829 [INFO][5011] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.829 [INFO][5011] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.829 [INFO][5011] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.838 [WARNING][5011] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.838 [INFO][5011] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" HandleID="k8s-pod-network.4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Workload="localhost-k8s-coredns--76f75df574--q8fl2-eth0" Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.840 [INFO][5011] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:52:56.842742 containerd[1439]: 2024-08-05 21:52:56.841 [INFO][5003] k8s.go 621: Teardown processing complete. ContainerID="4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb" Aug 5 21:52:56.843230 containerd[1439]: time="2024-08-05T21:52:56.842766882Z" level=info msg="TearDown network for sandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" successfully" Aug 5 21:52:56.845278 containerd[1439]: time="2024-08-05T21:52:56.845248270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:52:56.845507 containerd[1439]: time="2024-08-05T21:52:56.845306032Z" level=info msg="RemovePodSandbox \"4b542e6a289afdff408fbdb4c6530e8d62c6d81601ad3a3b8fbd6fd39c89cfbb\" returns successfully" Aug 5 21:53:01.375981 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:41982.service - OpenSSH per-connection server daemon (10.0.0.1:41982). 
Aug 5 21:53:01.415733 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 41982 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:53:01.417094 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:53:01.420839 systemd-logind[1420]: New session 19 of user core. Aug 5 21:53:01.430366 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:53:01.536572 sshd[5046]: pam_unix(sshd:session): session closed for user core Aug 5 21:53:01.539749 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:41982.service: Deactivated successfully. Aug 5 21:53:01.541542 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:53:01.543704 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:53:01.544782 systemd-logind[1420]: Removed session 19. Aug 5 21:53:05.208367 kubelet[2528]: E0805 21:53:05.208327 2528 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:53:06.548154 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:55750.service - OpenSSH per-connection server daemon (10.0.0.1:55750). Aug 5 21:53:06.587870 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 55750 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:53:06.589336 sshd[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:53:06.593470 systemd-logind[1420]: New session 20 of user core. Aug 5 21:53:06.602315 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:53:06.709489 sshd[5063]: pam_unix(sshd:session): session closed for user core Aug 5 21:53:06.712604 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:55750.service: Deactivated successfully. Aug 5 21:53:06.714570 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:53:06.715438 systemd-logind[1420]: Session 20 logged out. 
Waiting for processes to exit. Aug 5 21:53:06.716266 systemd-logind[1420]: Removed session 20.