Sep 4 17:20:12.929983 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 17:20:12.930011 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024 Sep 4 17:20:12.930024 kernel: KASLR enabled Sep 4 17:20:12.930029 kernel: efi: EFI v2.7 by EDK II Sep 4 17:20:12.930035 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 4 17:20:12.930041 kernel: random: crng init done Sep 4 17:20:12.930048 kernel: ACPI: Early table checksum verification disabled Sep 4 17:20:12.930054 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 4 17:20:12.930061 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:20:12.930069 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930075 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930081 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930087 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930093 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930101 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930108 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930115 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930121 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:20:12.930127 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 17:20:12.930133 kernel: NUMA: Failed to initialise from firmware Sep 
4 17:20:12.930140 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:20:12.930146 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 4 17:20:12.930152 kernel: Zone ranges: Sep 4 17:20:12.930158 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:20:12.930165 kernel: DMA32 empty Sep 4 17:20:12.930172 kernel: Normal empty Sep 4 17:20:12.930178 kernel: Movable zone start for each node Sep 4 17:20:12.930185 kernel: Early memory node ranges Sep 4 17:20:12.930191 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 4 17:20:12.930197 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 4 17:20:12.930203 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 4 17:20:12.930210 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 17:20:12.930216 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 17:20:12.930222 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 17:20:12.930228 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 17:20:12.930235 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:20:12.930242 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 17:20:12.930249 kernel: psci: probing for conduit method from ACPI. Sep 4 17:20:12.930256 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 4 17:20:12.930263 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:20:12.930272 kernel: psci: Trusted OS migration not required Sep 4 17:20:12.930279 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:20:12.930286 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 17:20:12.930294 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:20:12.930301 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:20:12.930308 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 17:20:12.930315 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:20:12.930321 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:20:12.930328 kernel: CPU features: detected: Hardware dirty bit management Sep 4 17:20:12.930334 kernel: CPU features: detected: Spectre-v4 Sep 4 17:20:12.930341 kernel: CPU features: detected: Spectre-BHB Sep 4 17:20:12.930348 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 17:20:12.930354 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 17:20:12.930362 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 17:20:12.930369 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 17:20:12.930375 kernel: alternatives: applying boot alternatives Sep 4 17:20:12.930383 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:20:12.930390 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 4 17:20:12.930397 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:20:12.930404 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:20:12.930411 kernel: Fallback order for Node 0: 0 Sep 4 17:20:12.930418 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 4 17:20:12.930425 kernel: Policy zone: DMA Sep 4 17:20:12.930431 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:20:12.930439 kernel: software IO TLB: area num 4. Sep 4 17:20:12.930447 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 4 17:20:12.930454 kernel: Memory: 2386592K/2572288K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 185696K reserved, 0K cma-reserved) Sep 4 17:20:12.930461 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:20:12.930468 kernel: trace event string verifier disabled Sep 4 17:20:12.930475 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:20:12.930482 kernel: rcu: RCU event tracing is enabled. Sep 4 17:20:12.930489 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:20:12.930496 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:20:12.930518 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:20:12.930525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:20:12.930532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:20:12.930542 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:20:12.930549 kernel: GICv3: 256 SPIs implemented Sep 4 17:20:12.930555 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:20:12.930562 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:20:12.930569 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:20:12.930576 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 17:20:12.930618 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 17:20:12.930626 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:20:12.930632 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:20:12.930639 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 4 17:20:12.930646 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 4 17:20:12.930656 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:20:12.930663 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:20:12.930670 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:20:12.930677 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:20:12.930685 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:20:12.930691 kernel: arm-pv: using stolen time PV Sep 4 17:20:12.930699 kernel: Console: colour dummy device 80x25 Sep 4 17:20:12.930706 kernel: ACPI: Core revision 20230628 Sep 4 17:20:12.930713 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 4 17:20:12.930720 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:20:12.930729 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:20:12.930736 kernel: landlock: Up and running. Sep 4 17:20:12.930742 kernel: SELinux: Initializing. Sep 4 17:20:12.930749 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:20:12.930757 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:20:12.930763 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:20:12.930770 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:20:12.930777 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:20:12.930784 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:20:12.930791 kernel: Platform MSI: ITS@0x8080000 domain created Sep 4 17:20:12.930799 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 4 17:20:12.930806 kernel: Remapping and enabling EFI services. Sep 4 17:20:12.930813 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:20:12.930820 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:20:12.930826 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 17:20:12.930834 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 4 17:20:12.930841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:20:12.930847 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:20:12.930854 kernel: Detected PIPT I-cache on CPU2 Sep 4 17:20:12.930863 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 17:20:12.930870 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 4 17:20:12.930883 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:20:12.930891 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 17:20:12.930898 kernel: Detected PIPT I-cache on CPU3 Sep 4 17:20:12.930905 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 17:20:12.930913 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 4 17:20:12.930920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:20:12.930927 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 17:20:12.930936 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:20:12.930943 kernel: SMP: Total of 4 processors activated. 
Sep 4 17:20:12.930950 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:20:12.930958 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:20:12.930965 kernel: CPU features: detected: Common not Private translations Sep 4 17:20:12.930972 kernel: CPU features: detected: CRC32 instructions Sep 4 17:20:12.930979 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 17:20:12.930986 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:20:12.930996 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:20:12.931003 kernel: CPU features: detected: Privileged Access Never Sep 4 17:20:12.931017 kernel: CPU features: detected: RAS Extension Support Sep 4 17:20:12.931024 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 17:20:12.931032 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:20:12.931039 kernel: alternatives: applying system-wide alternatives Sep 4 17:20:12.931047 kernel: devtmpfs: initialized Sep 4 17:20:12.931055 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:20:12.931062 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:20:12.931072 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:20:12.931079 kernel: SMBIOS 3.0.0 present. 
Sep 4 17:20:12.931086 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 4 17:20:12.931094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:20:12.931101 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:20:12.931109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:20:12.931116 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:20:12.931124 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:20:12.931131 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1 Sep 4 17:20:12.931140 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:20:12.931147 kernel: cpuidle: using governor menu Sep 4 17:20:12.931154 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:20:12.931162 kernel: ASID allocator initialised with 32768 entries Sep 4 17:20:12.931170 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:20:12.931177 kernel: Serial: AMBA PL011 UART driver Sep 4 17:20:12.931185 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:20:12.931192 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:20:12.931199 kernel: Modules: 509056 pages in range for PLT usage Sep 4 17:20:12.931208 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:20:12.931216 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:20:12.931223 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:20:12.931231 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:20:12.931239 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:20:12.931246 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:20:12.931253 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 
17:20:12.931260 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:20:12.931267 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:20:12.931275 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:20:12.931284 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:20:12.931291 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:20:12.931298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:20:12.931306 kernel: ACPI: Interpreter enabled Sep 4 17:20:12.931313 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:20:12.931320 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:20:12.931328 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:20:12.931335 kernel: printk: console [ttyAMA0] enabled Sep 4 17:20:12.931342 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:20:12.931493 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:20:12.931568 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 17:20:12.931650 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:20:12.931716 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 17:20:12.931782 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 17:20:12.931792 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 17:20:12.931800 kernel: PCI host bridge to bus 0000:00 Sep 4 17:20:12.931877 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 17:20:12.931937 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:20:12.931996 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 17:20:12.932064 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:20:12.932163 kernel: pci 0000:00:00.0: [1b36:0008] 
type 00 class 0x060000 Sep 4 17:20:12.932244 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:20:12.932319 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 4 17:20:12.932408 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 4 17:20:12.932478 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:20:12.932545 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:20:12.932640 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 4 17:20:12.932708 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 4 17:20:12.932768 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 17:20:12.932830 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:20:12.932888 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 4 17:20:12.932898 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:20:12.932906 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:20:12.932913 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:20:12.932921 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:20:12.932929 kernel: iommu: Default domain type: Translated Sep 4 17:20:12.932936 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:20:12.932946 kernel: efivars: Registered efivars operations Sep 4 17:20:12.932953 kernel: vgaarb: loaded Sep 4 17:20:12.932960 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:20:12.932968 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:20:12.932975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:20:12.932982 kernel: pnp: PnP ACPI init Sep 4 17:20:12.933066 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 17:20:12.933077 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:20:12.933085 
kernel: NET: Registered PF_INET protocol family Sep 4 17:20:12.933096 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:20:12.933104 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:20:12.933111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:20:12.933119 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:20:12.933127 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:20:12.933134 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:20:12.933142 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:20:12.933149 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:20:12.933159 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:20:12.933166 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:20:12.933173 kernel: kvm [1]: HYP mode not available Sep 4 17:20:12.933181 kernel: Initialise system trusted keyrings Sep 4 17:20:12.933188 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:20:12.933196 kernel: Key type asymmetric registered Sep 4 17:20:12.933203 kernel: Asymmetric key parser 'x509' registered Sep 4 17:20:12.933211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:20:12.933218 kernel: io scheduler mq-deadline registered Sep 4 17:20:12.933225 kernel: io scheduler kyber registered Sep 4 17:20:12.933234 kernel: io scheduler bfq registered Sep 4 17:20:12.933242 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:20:12.933249 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:20:12.933258 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:20:12.933325 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 17:20:12.933335 kernel: Serial: 8250/16550 driver, 4 
ports, IRQ sharing enabled Sep 4 17:20:12.933343 kernel: thunder_xcv, ver 1.0 Sep 4 17:20:12.933350 kernel: thunder_bgx, ver 1.0 Sep 4 17:20:12.933357 kernel: nicpf, ver 1.0 Sep 4 17:20:12.933367 kernel: nicvf, ver 1.0 Sep 4 17:20:12.933471 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:20:12.933559 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:20:12 UTC (1725470412) Sep 4 17:20:12.933573 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:20:12.933587 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 17:20:12.933607 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:20:12.933618 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:20:12.933626 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:20:12.933638 kernel: Segment Routing with IPv6 Sep 4 17:20:12.933645 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:20:12.933653 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:20:12.933660 kernel: Key type dns_resolver registered Sep 4 17:20:12.933668 kernel: registered taskstats version 1 Sep 4 17:20:12.933677 kernel: Loading compiled-in X.509 certificates Sep 4 17:20:12.933685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e' Sep 4 17:20:12.933693 kernel: Key type .fscrypt registered Sep 4 17:20:12.933701 kernel: Key type fscrypt-provisioning registered Sep 4 17:20:12.933711 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 17:20:12.933720 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:20:12.933732 kernel: ima: No architecture policies found Sep 4 17:20:12.933745 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:20:12.933752 kernel: clk: Disabling unused clocks Sep 4 17:20:12.933760 kernel: Freeing unused kernel memory: 39296K Sep 4 17:20:12.933767 kernel: Run /init as init process Sep 4 17:20:12.933775 kernel: with arguments: Sep 4 17:20:12.933782 kernel: /init Sep 4 17:20:12.933790 kernel: with environment: Sep 4 17:20:12.933798 kernel: HOME=/ Sep 4 17:20:12.933805 kernel: TERM=linux Sep 4 17:20:12.933812 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:20:12.933822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:20:12.933831 systemd[1]: Detected virtualization kvm. Sep 4 17:20:12.933840 systemd[1]: Detected architecture arm64. Sep 4 17:20:12.933850 systemd[1]: Running in initrd. Sep 4 17:20:12.933858 systemd[1]: No hostname configured, using default hostname. Sep 4 17:20:12.933866 systemd[1]: Hostname set to . Sep 4 17:20:12.933874 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:20:12.933882 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:20:12.933891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:20:12.933899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:20:12.933908 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:20:12.933917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 17:20:12.933926 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:20:12.933934 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:20:12.933943 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:20:12.933952 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:20:12.933960 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:20:12.933968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:20:12.933978 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:20:12.933986 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:20:12.933994 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:20:12.934003 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:20:12.934017 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:20:12.934025 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:20:12.934034 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:20:12.934042 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:20:12.934050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:20:12.934061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:20:12.934070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:20:12.934078 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:20:12.934086 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:20:12.934094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 4 17:20:12.934102 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:20:12.934110 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:20:12.934118 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:20:12.934128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:20:12.934136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:20:12.934144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:20:12.934152 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:20:12.934161 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:20:12.934192 systemd-journald[237]: Collecting audit messages is disabled. Sep 4 17:20:12.934214 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:20:12.934223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:20:12.934232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:20:12.934243 systemd-journald[237]: Journal started Sep 4 17:20:12.934262 systemd-journald[237]: Runtime Journal (/run/log/journal/32105735335e45ee8cf54f0d00c8300e) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:20:12.924031 systemd-modules-load[239]: Inserted module 'overlay' Sep 4 17:20:12.937691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:20:12.939966 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 4 17:20:12.941612 kernel: Bridge firewalling registered Sep 4 17:20:12.941632 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:20:12.942856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 4 17:20:12.955809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:20:12.957486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:20:12.960751 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:20:12.964457 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:20:12.971286 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:20:12.974911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:20:12.976677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:20:12.979084 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:20:12.989729 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:20:12.992054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:20:13.001438 dracut-cmdline[276]: dracut-dracut-053 Sep 4 17:20:13.003717 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:20:13.021535 systemd-resolved[280]: Positive Trust Anchors: Sep 4 17:20:13.021552 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:20:13.021595 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:20:13.026552 systemd-resolved[280]: Defaulting to hostname 'linux'. Sep 4 17:20:13.027549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:20:13.031368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:20:13.069607 kernel: SCSI subsystem initialized Sep 4 17:20:13.074597 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:20:13.082605 kernel: iscsi: registered transport (tcp) Sep 4 17:20:13.095622 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:20:13.095687 kernel: QLogic iSCSI HBA Driver Sep 4 17:20:13.135778 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:20:13.147829 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:20:13.163000 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 4 17:20:13.163065 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:20:13.164046 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:20:13.210620 kernel: raid6: neonx8 gen() 15745 MB/s Sep 4 17:20:13.227603 kernel: raid6: neonx4 gen() 15656 MB/s Sep 4 17:20:13.244599 kernel: raid6: neonx2 gen() 13221 MB/s Sep 4 17:20:13.261598 kernel: raid6: neonx1 gen() 10501 MB/s Sep 4 17:20:13.278597 kernel: raid6: int64x8 gen() 6943 MB/s Sep 4 17:20:13.295606 kernel: raid6: int64x4 gen() 7338 MB/s Sep 4 17:20:13.312599 kernel: raid6: int64x2 gen() 6120 MB/s Sep 4 17:20:13.329611 kernel: raid6: int64x1 gen() 5055 MB/s Sep 4 17:20:13.329650 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s Sep 4 17:20:13.346624 kernel: raid6: .... xor() 11902 MB/s, rmw enabled Sep 4 17:20:13.346662 kernel: raid6: using neon recovery algorithm Sep 4 17:20:13.352620 kernel: xor: measuring software checksum speed Sep 4 17:20:13.352639 kernel: 8regs : 19840 MB/sec Sep 4 17:20:13.353609 kernel: 32regs : 19668 MB/sec Sep 4 17:20:13.355058 kernel: arm64_neon : 27224 MB/sec Sep 4 17:20:13.355071 kernel: xor: using function: arm64_neon (27224 MB/sec) Sep 4 17:20:13.404604 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:20:13.415985 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:20:13.436746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:20:13.448758 systemd-udevd[462]: Using default interface naming scheme 'v255'. Sep 4 17:20:13.451880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:20:13.463868 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:20:13.474944 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Sep 4 17:20:13.502379 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:20:13.521770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:20:13.564627 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:20:13.572784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:20:13.586806 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:20:13.589253 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:20:13.590848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:20:13.592932 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:20:13.602744 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:20:13.619088 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 17:20:13.619350 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:20:13.621341 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:20:13.624347 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:20:13.624382 kernel: GPT:9289727 != 19775487
Sep 4 17:20:13.625692 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:20:13.627050 kernel: GPT:9289727 != 19775487
Sep 4 17:20:13.627082 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:20:13.627102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:20:13.630442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:20:13.630576 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:20:13.632808 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:20:13.634392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:20:13.634546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:20:13.637150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:20:13.648604 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (516)
Sep 4 17:20:13.650595 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (525)
Sep 4 17:20:13.653591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:20:13.666611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:20:13.671453 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:20:13.676437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:20:13.682962 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:20:13.684176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:20:13.689694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:20:13.705767 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:20:13.707590 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:20:13.713605 disk-uuid[553]: Primary Header is updated.
Sep 4 17:20:13.713605 disk-uuid[553]: Secondary Entries is updated.
Sep 4 17:20:13.713605 disk-uuid[553]: Secondary Header is updated.
Sep 4 17:20:13.718605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:20:13.730641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:20:13.733918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:20:14.732656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:20:14.732816 disk-uuid[554]: The operation has completed successfully.
Sep 4 17:20:14.759211 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:20:14.759313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:20:14.775772 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:20:14.780067 sh[577]: Success
Sep 4 17:20:14.791597 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:20:14.825906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:20:14.840035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:20:14.842472 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:20:14.853603 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87
Sep 4 17:20:14.853653 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:20:14.853664 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:20:14.853827 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:20:14.854593 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:20:14.858968 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:20:14.860352 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:20:14.867744 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:20:14.869400 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:20:14.877900 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:20:14.877952 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:20:14.877969 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:20:14.880633 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:20:14.891412 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:20:14.894632 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:20:14.899816 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:20:14.907813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:20:14.980187 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:20:15.004795 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:20:15.059908 systemd-networkd[762]: lo: Link UP
Sep 4 17:20:15.059920 systemd-networkd[762]: lo: Gained carrier
Sep 4 17:20:15.060654 systemd-networkd[762]: Enumeration completed
Sep 4 17:20:15.061233 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:20:15.061236 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:20:15.062634 systemd-networkd[762]: eth0: Link UP
Sep 4 17:20:15.062638 systemd-networkd[762]: eth0: Gained carrier
Sep 4 17:20:15.062647 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:20:15.064702 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:20:15.065909 systemd[1]: Reached target network.target - Network.
Sep 4 17:20:15.079655 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:20:15.088436 ignition[670]: Ignition 2.19.0
Sep 4 17:20:15.088445 ignition[670]: Stage: fetch-offline
Sep 4 17:20:15.088481 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:20:15.088491 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:20:15.088818 ignition[670]: parsed url from cmdline: ""
Sep 4 17:20:15.088821 ignition[670]: no config URL provided
Sep 4 17:20:15.088826 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:20:15.088833 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:20:15.088858 ignition[670]: op(1): [started] loading QEMU firmware config module
Sep 4 17:20:15.088863 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:20:15.099321 ignition[670]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:20:15.139262 ignition[670]: parsing config with SHA512: 12aae1a8679a4e3a0abd0dfde1428fcb67864c8108eac51a0b1172afb260baee1c5da62ef29f054b5bdb2b0ae42c6c3c55a70b71bfb30bf4a553da8461041646
Sep 4 17:20:15.144961 unknown[670]: fetched base config from "system"
Sep 4 17:20:15.144970 unknown[670]: fetched user config from "qemu"
Sep 4 17:20:15.145391 ignition[670]: fetch-offline: fetch-offline passed
Sep 4 17:20:15.147643 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:20:15.145458 ignition[670]: Ignition finished successfully
Sep 4 17:20:15.149015 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:20:15.156785 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:20:15.167746 ignition[775]: Ignition 2.19.0
Sep 4 17:20:15.167756 ignition[775]: Stage: kargs
Sep 4 17:20:15.167923 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:20:15.167932 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:20:15.168791 ignition[775]: kargs: kargs passed
Sep 4 17:20:15.172450 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:20:15.168837 ignition[775]: Ignition finished successfully
Sep 4 17:20:15.181781 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:20:15.193506 ignition[783]: Ignition 2.19.0
Sep 4 17:20:15.193516 ignition[783]: Stage: disks
Sep 4 17:20:15.193710 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:20:15.193721 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:20:15.196423 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:20:15.194700 ignition[783]: disks: disks passed
Sep 4 17:20:15.197741 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:20:15.194747 ignition[783]: Ignition finished successfully
Sep 4 17:20:15.199442 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:20:15.201422 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:20:15.202867 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:20:15.204705 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:20:15.212748 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:20:15.223205 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:20:15.227972 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:20:15.235680 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:20:15.281605 kernel: EXT4-fs (vda9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none.
Sep 4 17:20:15.282030 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:20:15.283278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:20:15.295688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:20:15.298254 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:20:15.299310 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:20:15.299357 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:20:15.299381 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:20:15.305552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:20:15.307960 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:20:15.312307 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Sep 4 17:20:15.312340 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:20:15.312353 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:20:15.313800 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:20:15.316598 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:20:15.317676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:20:15.355209 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:20:15.359907 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:20:15.363686 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:20:15.367486 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:20:15.456730 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:20:15.472702 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:20:15.475060 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:20:15.479611 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:20:15.499899 ignition[915]: INFO : Ignition 2.19.0
Sep 4 17:20:15.499899 ignition[915]: INFO : Stage: mount
Sep 4 17:20:15.501496 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:20:15.501496 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:20:15.501496 ignition[915]: INFO : mount: mount passed
Sep 4 17:20:15.501496 ignition[915]: INFO : Ignition finished successfully
Sep 4 17:20:15.503654 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:20:15.505641 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:20:15.510725 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:20:15.851635 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:20:15.864773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:20:15.871192 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Sep 4 17:20:15.871230 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:20:15.871241 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:20:15.872634 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:20:15.874596 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:20:15.875880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:20:15.891533 ignition[947]: INFO : Ignition 2.19.0
Sep 4 17:20:15.891533 ignition[947]: INFO : Stage: files
Sep 4 17:20:15.893224 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:20:15.893224 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:20:15.893224 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:20:15.896367 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:20:15.896367 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:20:15.898989 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:20:15.898989 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:20:15.898989 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:20:15.898518 unknown[947]: wrote ssh authorized keys file for user: core
Sep 4 17:20:15.903656 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:20:15.903656 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:20:15.942736 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:20:16.006213 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:20:16.008677 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:20:16.022769 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:20:16.022769 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:20:16.022769 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:20:16.022769 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:20:16.022769 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Sep 4 17:20:16.276279 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:20:16.471709 systemd-networkd[762]: eth0: Gained IPv6LL
Sep 4 17:20:16.565362 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:20:16.565362 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:20:16.568861 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:20:16.588914 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:20:16.593008 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:20:16.595418 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:20:16.595418 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:20:16.595418 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:20:16.595418 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:20:16.595418 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:20:16.595418 ignition[947]: INFO : files: files passed
Sep 4 17:20:16.595418 ignition[947]: INFO : Ignition finished successfully
Sep 4 17:20:16.595931 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:20:16.608787 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:20:16.610734 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:20:16.614383 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:20:16.615480 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:20:16.619883 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:20:16.623153 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:20:16.623153 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:20:16.626303 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:20:16.627764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:20:16.629053 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:20:16.642736 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:20:16.662086 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:20:16.662195 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:20:16.664369 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:20:16.666163 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:20:16.667946 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:20:16.668851 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:20:16.684520 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:20:16.692757 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:20:16.700668 systemd[1]: Stopped target network.target - Network.
Sep 4 17:20:16.701666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:20:16.703502 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:20:16.705638 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:20:16.707497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:20:16.707637 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:20:16.710205 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:20:16.712272 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:20:16.713940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:20:16.715785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:20:16.717782 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:20:16.719850 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:20:16.721681 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:20:16.723701 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:20:16.725763 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:20:16.727549 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:20:16.729169 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:20:16.729301 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:20:16.731708 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:20:16.733655 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:20:16.735571 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:20:16.737502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:20:16.738821 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:20:16.738959 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:20:16.741798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:20:16.741912 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:20:16.743827 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:20:16.745345 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:20:16.745448 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:20:16.747547 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:20:16.749153 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:20:16.750882 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:20:16.750971 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:20:16.752976 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:20:16.753065 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:20:16.754644 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:20:16.754758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:20:16.756367 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:20:16.756460 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:20:16.769808 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:20:16.772362 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:20:16.773394 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:20:16.776838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:20:16.779781 ignition[1002]: INFO : Ignition 2.19.0
Sep 4 17:20:16.779781 ignition[1002]: INFO : Stage: umount
Sep 4 17:20:16.779781 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:20:16.779781 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:20:16.778520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:20:16.795797 ignition[1002]: INFO : umount: umount passed
Sep 4 17:20:16.795797 ignition[1002]: INFO : Ignition finished successfully
Sep 4 17:20:16.778677 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:20:16.783378 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:20:16.783498 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:20:16.785968 systemd-networkd[762]: eth0: DHCPv6 lease lost
Sep 4 17:20:16.788186 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:20:16.789049 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:20:16.789143 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:20:16.796919 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:20:16.797027 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:20:16.799213 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:20:16.799483 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:20:16.804362 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:20:16.804538 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:20:16.807809 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:20:16.807845 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:20:16.812436 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:20:16.812501 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:20:16.813773 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:20:16.813819 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:20:16.816625 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:20:16.816670 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:20:16.818311 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:20:16.818349 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:20:16.835741 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:20:16.836610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:20:16.836680 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:20:16.838616 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:20:16.838663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:20:16.840548 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:20:16.840616 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:20:16.842574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:20:16.842637 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:20:16.844566 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:20:16.853913 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:20:16.854050 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:20:16.855951 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:20:16.856101 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:20:16.858487 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:20:16.858551 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:20:16.860311 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:20:16.860349 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:20:16.862434 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:20:16.862487 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:20:16.865280 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:20:16.865328 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:20:16.868407 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:20:16.868454 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:20:16.882788 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:20:16.883841 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:20:16.883912 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:20:16.886013 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:20:16.886061 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:20:16.888006 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:20:16.888054 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:20:16.890047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:20:16.890094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:20:16.892575 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:20:16.892692 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:20:16.894761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:20:16.894839 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:20:16.897273 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:20:16.898584 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:20:16.898645 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:20:16.900469 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:20:16.910431 systemd[1]: Switching root.
Sep 4 17:20:16.938850 systemd-journald[237]: Journal stopped
Sep 4 17:20:17.676832 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:20:17.676895 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:20:17.676908 kernel: SELinux: policy capability open_perms=1
Sep 4 17:20:17.676918 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:20:17.676933 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:20:17.676943 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:20:17.676952 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:20:17.676961 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:20:17.676971 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:20:17.676982 kernel: audit: type=1403 audit(1725470417.089:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:20:17.677003 systemd[1]: Successfully loaded SELinux policy in 33.074ms.
Sep 4 17:20:17.677023 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.886ms.
Sep 4 17:20:17.677034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:20:17.677045 systemd[1]: Detected virtualization kvm.
Sep 4 17:20:17.677056 systemd[1]: Detected architecture arm64.
Sep 4 17:20:17.677066 systemd[1]: Detected first boot.
Sep 4 17:20:17.677077 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:20:17.677087 zram_generator::config[1046]: No configuration found.
Sep 4 17:20:17.677101 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:20:17.677112 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:20:17.677122 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:20:17.677134 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:20:17.677145 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:20:17.677156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:20:17.677166 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:20:17.677177 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:20:17.677189 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:20:17.677200 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:20:17.677211 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:20:17.677221 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:20:17.677231 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:20:17.677242 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:20:17.677253 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:20:17.677263 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:20:17.677275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:20:17.677291 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:20:17.677301 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 17:20:17.677311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:20:17.677322 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:20:17.677332 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:20:17.677343 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:20:17.677353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:20:17.677366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:20:17.677376 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:20:17.677387 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:20:17.677397 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:20:17.677407 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:20:17.677418 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:20:17.677429 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:20:17.677439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:20:17.677449 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:20:17.677460 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:20:17.677472 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:20:17.677484 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:20:17.677494 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:20:17.677505 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:20:17.677516 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:20:17.677526 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:20:17.677537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:20:17.677552 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:20:17.677563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:20:17.677574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:20:17.677593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:20:17.677607 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:20:17.677618 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:20:17.677640 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:20:17.677651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:20:17.677662 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:20:17.677673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:20:17.677686 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:20:17.677697 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:20:17.677708 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:20:17.677719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:20:17.677730 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:20:17.677739 kernel: fuse: init (API version 7.39)
Sep 4 17:20:17.677749 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:20:17.677759 kernel: loop: module loaded
Sep 4 17:20:17.677771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:20:17.677782 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:20:17.677792 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:20:17.677802 kernel: ACPI: bus type drm_connector registered
Sep 4 17:20:17.677812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:20:17.677843 systemd-journald[1112]: Collecting audit messages is disabled.
Sep 4 17:20:17.677865 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:20:17.677877 systemd[1]: Stopped verity-setup.service.
Sep 4 17:20:17.677890 systemd-journald[1112]: Journal started
Sep 4 17:20:17.677912 systemd-journald[1112]: Runtime Journal (/run/log/journal/32105735335e45ee8cf54f0d00c8300e) is 5.9M, max 47.3M, 41.4M free.
Sep 4 17:20:17.481127 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:20:17.495051 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:20:17.495405 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:20:17.681329 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:20:17.682025 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:20:17.683217 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:20:17.684400 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:20:17.685501 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:20:17.686692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:20:17.687851 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:20:17.689061 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:20:17.691617 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:20:17.692983 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:20:17.693137 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:20:17.694540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:20:17.694691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:20:17.696272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:20:17.696439 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:20:17.697732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:20:17.697870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:20:17.699259 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:20:17.699387 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:20:17.700629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:20:17.700769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:20:17.702057 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:20:17.703370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:20:17.704791 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:20:17.716628 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:20:17.725687 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:20:17.727803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:20:17.728876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:20:17.728915 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:20:17.730841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:20:17.732992 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:20:17.735031 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:20:17.736133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:20:17.737305 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:20:17.740089 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:20:17.741272 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:20:17.742398 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:20:17.743530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:20:17.744487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:20:17.749854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:20:17.752289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:20:17.752445 systemd-journald[1112]: Time spent on flushing to /var/log/journal/32105735335e45ee8cf54f0d00c8300e is 24.913ms for 858 entries.
Sep 4 17:20:17.752445 systemd-journald[1112]: System Journal (/var/log/journal/32105735335e45ee8cf54f0d00c8300e) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:20:17.785114 systemd-journald[1112]: Received client request to flush runtime journal.
Sep 4 17:20:17.785160 kernel: loop0: detected capacity change from 0 to 194512
Sep 4 17:20:17.759622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:20:17.761159 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:20:17.762617 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:20:17.764048 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:20:17.765564 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:20:17.773741 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:20:17.782733 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Sep 4 17:20:17.782744 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Sep 4 17:20:17.786870 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:20:17.790782 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:20:17.794621 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:20:17.797127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:20:17.798589 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:20:17.800979 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:20:17.814853 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:20:17.817110 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:20:17.818015 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:20:17.819890 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:20:17.831612 kernel: loop1: detected capacity change from 0 to 114288
Sep 4 17:20:17.839128 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:20:17.845767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:20:17.867068 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Sep 4 17:20:17.867089 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Sep 4 17:20:17.870671 kernel: loop2: detected capacity change from 0 to 65520
Sep 4 17:20:17.871336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:20:17.911623 kernel: loop3: detected capacity change from 0 to 194512
Sep 4 17:20:17.917612 kernel: loop4: detected capacity change from 0 to 114288
Sep 4 17:20:17.924832 kernel: loop5: detected capacity change from 0 to 65520
Sep 4 17:20:17.927674 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:20:17.928093 (sd-merge)[1184]: Merged extensions into '/usr'.
Sep 4 17:20:17.932468 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:20:17.932484 systemd[1]: Reloading...
Sep 4 17:20:17.989611 zram_generator::config[1206]: No configuration found.
Sep 4 17:20:18.032299 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:20:18.084509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:20:18.120787 systemd[1]: Reloading finished in 187 ms.
Sep 4 17:20:18.154088 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:20:18.155750 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:20:18.168850 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:20:18.170762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:20:18.177234 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:20:18.177250 systemd[1]: Reloading...
Sep 4 17:20:18.188412 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:20:18.189023 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:20:18.189764 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:20:18.190077 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Sep 4 17:20:18.190189 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Sep 4 17:20:18.192614 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:20:18.192711 systemd-tmpfiles[1244]: Skipping /boot
Sep 4 17:20:18.199425 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:20:18.199522 systemd-tmpfiles[1244]: Skipping /boot
Sep 4 17:20:18.228388 zram_generator::config[1272]: No configuration found.
Sep 4 17:20:18.309551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:20:18.346287 systemd[1]: Reloading finished in 168 ms.
Sep 4 17:20:18.362617 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:20:18.375055 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:20:18.383359 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:20:18.386032 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:20:18.388424 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:20:18.392917 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:20:18.401516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:20:18.404544 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:20:18.408677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:20:18.411890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:20:18.416070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:20:18.421558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:20:18.422760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:20:18.428019 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:20:18.432606 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:20:18.434510 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:20:18.436399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:20:18.436548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:20:18.438226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:20:18.438454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:20:18.456532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:20:18.456730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:20:18.461006 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Sep 4 17:20:18.463071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:20:18.472938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:20:18.483943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:20:18.486630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:20:18.488192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:20:18.492873 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:20:18.494004 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:20:18.494950 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:20:18.496898 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:20:18.500596 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:20:18.502451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:20:18.502636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:20:18.504156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:20:18.504651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:20:18.506220 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:20:18.506349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:20:18.509043 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:20:18.519920 augenrules[1359]: No rules
Sep 4 17:20:18.522094 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:20:18.524314 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:20:18.527810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:20:18.535852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:20:18.539896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:20:18.542123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:20:18.547831 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:20:18.548601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1347)
Sep 4 17:20:18.549208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:20:18.551883 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:20:18.561980 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 17:20:18.563200 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:20:18.563851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:20:18.564028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:20:18.567035 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:20:18.567209 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:20:18.568512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:20:18.568693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:20:18.570339 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:20:18.570498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:20:18.576445 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 17:20:18.579558 systemd-resolved[1310]: Positive Trust Anchors:
Sep 4 17:20:18.579624 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:20:18.579659 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:20:18.588635 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1350)
Sep 4 17:20:18.587797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:20:18.587862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:20:18.593645 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1350)
Sep 4 17:20:18.604327 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Sep 4 17:20:18.605848 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:20:18.607171 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:20:18.627458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:20:18.641442 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:20:18.645673 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 17:20:18.647421 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:20:18.650632 systemd-networkd[1380]: lo: Link UP
Sep 4 17:20:18.650639 systemd-networkd[1380]: lo: Gained carrier
Sep 4 17:20:18.651352 systemd-networkd[1380]: Enumeration completed
Sep 4 17:20:18.651527 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:20:18.653104 systemd[1]: Reached target network.target - Network.
Sep 4 17:20:18.656566 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:20:18.656697 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:20:18.659337 systemd-networkd[1380]: eth0: Link UP
Sep 4 17:20:18.659432 systemd-networkd[1380]: eth0: Gained carrier
Sep 4 17:20:18.659490 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:20:18.665776 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:20:18.667261 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:20:18.682771 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:20:18.685730 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Sep 4 17:20:18.686788 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 17:20:18.686834 systemd-timesyncd[1381]: Initial clock synchronization to Wed 2024-09-04 17:20:18.860285 UTC.
Sep 4 17:20:18.694874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:20:18.703060 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:20:18.706266 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:20:18.721626 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:20:18.735622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:20:18.750237 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:20:18.751852 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:20:18.752895 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:20:18.753916 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:20:18.755146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:20:18.756618 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:20:18.757715 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:20:18.758991 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:20:18.760199 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:20:18.760236 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:20:18.761166 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:20:18.763164 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:20:18.765637 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:20:18.779801 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:20:18.782149 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:20:18.783657 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:20:18.784896 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:20:18.785790 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:20:18.786717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:20:18.786750 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:20:18.787790 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:20:18.789781 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:20:18.790647 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:20:18.792224 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:20:18.800836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:20:18.802029 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:20:18.803883 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:20:18.806754 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:20:18.809240 jq[1414]: false
Sep 4 17:20:18.811927 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:20:18.816117 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:20:18.819110 extend-filesystems[1415]: Found loop3
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found loop4
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found loop5
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda1
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda2
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda3
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found usr
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda4
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda6
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda7
Sep 4 17:20:18.821763 extend-filesystems[1415]: Found vda9
Sep 4 17:20:18.821763 extend-filesystems[1415]: Checking size of /dev/vda9
Sep 4 17:20:18.821644 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:20:18.827418 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:20:18.827959 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:20:18.834685 dbus-daemon[1413]: [system] SELinux support is enabled
Sep 4 17:20:18.839705 extend-filesystems[1415]: Resized partition /dev/vda9
Sep 4 17:20:18.835874 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:20:18.839711 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:20:18.845296 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:20:18.845745 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024)
Sep 4 17:20:18.848352 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:20:18.850654 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:20:18.850807 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:20:18.851086 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:20:18.851231 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:20:18.853294 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:20:18.853456 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:20:18.858043 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 17:20:18.858097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1367)
Sep 4 17:20:18.860693 jq[1436]: true
Sep 4 17:20:18.871774 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:20:18.871821 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:20:18.874431 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:20:18.874466 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:20:18.885199 jq[1444]: true
Sep 4 17:20:18.890799 update_engine[1431]: I0904 17:20:18.890487  1431 main.cc:92] Flatcar Update Engine starting
Sep 4 17:20:18.894071 update_engine[1431]: I0904 17:20:18.894009  1431 update_check_scheduler.cc:74] Next update check in 10m26s
Sep 4 17:20:18.899061 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:20:18.899709 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:20:18.900627 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 17:20:18.904860 tar[1438]: linux-arm64/helm
Sep 4 17:20:18.910913 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:20:18.911468 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:20:18.911468 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:20:18.911468 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 17:20:18.916245 extend-filesystems[1415]: Resized filesystem in /dev/vda9
Sep 4 17:20:18.916725 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:20:18.918654 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:20:18.921078 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 4 17:20:18.921264 systemd-logind[1424]: New seat seat0.
Sep 4 17:20:18.921853 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:20:18.944680 bash[1469]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:20:18.946913 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:20:18.948762 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 17:20:18.978055 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:20:19.101053 containerd[1446]: time="2024-09-04T17:20:19.100909391Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20
Sep 4 17:20:19.132207 containerd[1446]: time="2024-09-04T17:20:19.132149547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133671798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133713811Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133733264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133909568Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133928327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133981210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.133993593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.134164135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.134180074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.134194010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:20:19.134886 containerd[1446]: time="2024-09-04T17:20:19.134205085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.135218 containerd[1446]: time="2024-09-04T17:20:19.134274642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.135218 containerd[1446]: time="2024-09-04T17:20:19.134473056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:20:19.135218 containerd[1446]: time="2024-09-04T17:20:19.134566726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:20:19.135218 containerd[1446]: time="2024-09-04T17:20:19.134580662Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:20:19.135218 containerd[1446]: time="2024-09-04T17:20:19.134678377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:20:19.135218 containerd[1446]: time="2024-09-04T17:20:19.134734612Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:20:19.138676 containerd[1446]: time="2024-09-04T17:20:19.138643963Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:20:19.138819 containerd[1446]: time="2024-09-04T17:20:19.138802490Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:20:19.138877 containerd[1446]: time="2024-09-04T17:20:19.138864650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:20:19.138946 containerd[1446]: time="2024-09-04T17:20:19.138931347Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:20:19.139004 containerd[1446]: time="2024-09-04T17:20:19.138991055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:20:19.139198 containerd[1446]: time="2024-09-04T17:20:19.139176677Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:20:19.139508 containerd[1446]: time="2024-09-04T17:20:19.139487029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:20:19.139692 containerd[1446]: time="2024-09-04T17:20:19.139672284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:20:19.139775 containerd[1446]: time="2024-09-04T17:20:19.139760068Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:20:19.139828 containerd[1446]: time="2024-09-04T17:20:19.139816180Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:20:19.139881 containerd[1446]: time="2024-09-04T17:20:19.139868859Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.139936 containerd[1446]: time="2024-09-04T17:20:19.139923540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140003 containerd[1446]: time="2024-09-04T17:20:19.139988725Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140056 containerd[1446]: time="2024-09-04T17:20:19.140044223Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140122 containerd[1446]: time="2024-09-04T17:20:19.140108264Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140174 containerd[1446]: time="2024-09-04T17:20:19.140162618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140225 containerd[1446]: time="2024-09-04T17:20:19.140212354Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140275 containerd[1446]: time="2024-09-04T17:20:19.140262786Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:20:19.140350 containerd[1446]: time="2024-09-04T17:20:19.140336307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140409 containerd[1446]: time="2024-09-04T17:20:19.140396587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140473 containerd[1446]: time="2024-09-04T17:20:19.140459606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140535 containerd[1446]: time="2024-09-04T17:20:19.140521725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140588 containerd[1446]: time="2024-09-04T17:20:19.140576488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140658 containerd[1446]: time="2024-09-04T17:20:19.140644901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140724 containerd[1446]: time="2024-09-04T17:20:19.140711802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140777 containerd[1446]: time="2024-09-04T17:20:19.140765625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140838 containerd[1446]: time="2024-09-04T17:20:19.140825538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140895 containerd[1446]: time="2024-09-04T17:20:19.140883284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.140949 containerd[1446]: time="2024-09-04T17:20:19.140936372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.141016 containerd[1446]: time="2024-09-04T17:20:19.141002987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.141072 containerd[1446]: time="2024-09-04T17:20:19.141060284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.141143 containerd[1446]: time="2024-09-04T17:20:19.141129310Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:20:19.141207 containerd[1446]: time="2024-09-04T17:20:19.141194821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.141262 containerd[1446]: time="2024-09-04T17:20:19.141250238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.141313 containerd[1446]: time="2024-09-04T17:20:19.141301487Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:20:19.141489 containerd[1446]: time="2024-09-04T17:20:19.141472683Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:20:19.141554 containerd[1446]: time="2024-09-04T17:20:19.141538971Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:20:19.141633 containerd[1446]: time="2024-09-04T17:20:19.141617355Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:20:19.141692 containerd[1446]: time="2024-09-04T17:20:19.141675674Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:20:19.141741 containerd[1446]: time="2024-09-04T17:20:19.141728108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.141791 containerd[1446]: time="2024-09-04T17:20:19.141779847Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:20:19.141838 containerd[1446]: time="2024-09-04T17:20:19.141827254Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:20:19.141901 containerd[1446]: time="2024-09-04T17:20:19.141888596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:20:19.142362 containerd[1446]: time="2024-09-04T17:20:19.142294089Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:20:19.142540 containerd[1446]: time="2024-09-04T17:20:19.142521478Z" level=info msg="Connect containerd service"
Sep 4 17:20:19.142637 containerd[1446]: time="2024-09-04T17:20:19.142622177Z" level=info msg="using legacy CRI server"
Sep 4 17:20:19.142687 containerd[1446]: time="2024-09-04T17:20:19.142674856Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:20:19.142827 containerd[1446]: time="2024-09-04T17:20:19.142810906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:20:19.143675 containerd[1446]: time="2024-09-04T17:20:19.143645021Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:20:19.144079 containerd[1446]: time="2024-09-04T17:20:19.143961299Z" level=info msg="Start subscribing containerd event"
Sep 4 17:20:19.144079 containerd[1446]: time="2024-09-04T17:20:19.144021620Z" level=info msg="Start recovering state"
Sep 4 17:20:19.144134 containerd[1446]: time="2024-09-04T17:20:19.144096899Z" level=info msg="Start event monitor"
Sep 4 17:20:19.144134 containerd[1446]: time="2024-09-04T17:20:19.144109977Z" level=info msg="Start snapshots syncer"
Sep 4 17:20:19.144134 containerd[1446]: time="2024-09-04T17:20:19.144120316Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:20:19.144134 containerd[1446]: time="2024-09-04T17:20:19.144128040Z" level=info msg="Start streaming server"
Sep 4 17:20:19.144391 containerd[1446]: time="2024-09-04T17:20:19.144369448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:20:19.144567 containerd[1446]: time="2024-09-04T17:20:19.144541175Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:20:19.144809 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:20:19.146528 containerd[1446]: time="2024-09-04T17:20:19.146497486Z" level=info msg="containerd successfully booted in 0.049815s"
Sep 4 17:20:19.241855 tar[1438]: linux-arm64/LICENSE
Sep 4 17:20:19.243308 tar[1438]: linux-arm64/README.md
Sep 4 17:20:19.255204 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:20:19.735770 systemd-networkd[1380]: eth0: Gained IPv6LL
Sep 4 17:20:19.738455 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:20:19.740442 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:20:19.750845 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 17:20:19.753502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:19.755636 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:20:19.775545 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 17:20:19.775737 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 17:20:19.777309 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:20:19.779025 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:20:20.182589 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:20:20.202556 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:20:20.216882 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:20:20.221808 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:20:20.222001 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:20:20.224950 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:20:20.232495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:20.236231 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:20:20.237481 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:20:20.250934 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:20:20.253008 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 4 17:20:20.254261 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:20:20.255339 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:20:20.256461 systemd[1]: Startup finished in 607ms (kernel) + 4.359s (initrd) + 3.200s (userspace) = 8.167s.
Sep 4 17:20:20.708747 kubelet[1522]: E0904 17:20:20.708657    1522 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:20:20.711698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:20:20.711843 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:20:25.684271 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:20:25.685395 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:44248.service - OpenSSH per-connection server daemon (10.0.0.1:44248).
Sep 4 17:20:25.742290 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 44248 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:25.745966 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:25.753865 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:20:25.767846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:20:25.769672 systemd-logind[1424]: New session 1 of user core.
Sep 4 17:20:25.779693 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:20:25.781877 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:20:25.789970 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:20:25.875726 systemd[1544]: Queued start job for default target default.target.
Sep 4 17:20:25.888532 systemd[1544]: Created slice app.slice - User Application Slice.
Sep 4 17:20:25.888577 systemd[1544]: Reached target paths.target - Paths.
Sep 4 17:20:25.888607 systemd[1544]: Reached target timers.target - Timers.
Sep 4 17:20:25.889811 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:20:25.899174 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:20:25.899237 systemd[1544]: Reached target sockets.target - Sockets.
Sep 4 17:20:25.899249 systemd[1544]: Reached target basic.target - Basic System.
Sep 4 17:20:25.899282 systemd[1544]: Reached target default.target - Main User Target.
Sep 4 17:20:25.899308 systemd[1544]: Startup finished in 99ms.
Sep 4 17:20:25.899624 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:20:25.901011 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:20:25.974875 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:44260.service - OpenSSH per-connection server daemon (10.0.0.1:44260).
Sep 4 17:20:26.024946 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 44260 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:26.026289 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:26.031377 systemd-logind[1424]: New session 2 of user core.
Sep 4 17:20:26.042757 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:20:26.096099 sshd[1555]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:26.109977 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:44260.service: Deactivated successfully.
Sep 4 17:20:26.111285 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:20:26.111963 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:20:26.113584 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:44266.service - OpenSSH per-connection server daemon (10.0.0.1:44266).
Sep 4 17:20:26.114352 systemd-logind[1424]: Removed session 2.
Sep 4 17:20:26.152594 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 44266 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:26.153854 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:26.158583 systemd-logind[1424]: New session 3 of user core.
Sep 4 17:20:26.168779 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:20:26.217465 sshd[1562]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:26.226817 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:44266.service: Deactivated successfully.
Sep 4 17:20:26.228056 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:20:26.229233 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:20:26.240921 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:44276.service - OpenSSH per-connection server daemon (10.0.0.1:44276).
Sep 4 17:20:26.241792 systemd-logind[1424]: Removed session 3.
Sep 4 17:20:26.275971 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 44276 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:26.277583 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:26.281402 systemd-logind[1424]: New session 4 of user core.
Sep 4 17:20:26.303786 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:20:26.356599 sshd[1569]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:26.380254 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:44276.service: Deactivated successfully.
Sep 4 17:20:26.383223 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:20:26.384498 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:20:26.385728 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:44288.service - OpenSSH per-connection server daemon (10.0.0.1:44288).
Sep 4 17:20:26.386830 systemd-logind[1424]: Removed session 4.
Sep 4 17:20:26.422124 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 44288 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:26.423695 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:26.428197 systemd-logind[1424]: New session 5 of user core.
Sep 4 17:20:26.440777 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:20:26.515765 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:20:26.516402 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:20:26.530392 sudo[1579]: pam_unix(sudo:session): session closed for user root
Sep 4 17:20:26.532867 sshd[1576]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:26.546161 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:44288.service: Deactivated successfully.
Sep 4 17:20:26.547659 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:20:26.550786 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:20:26.559860 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290).
Sep 4 17:20:26.560715 systemd-logind[1424]: Removed session 5.
Sep 4 17:20:26.593764 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:26.595104 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:26.599074 systemd-logind[1424]: New session 6 of user core.
Sep 4 17:20:26.608734 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:20:26.659765 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:20:26.660049 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:20:26.663023 sudo[1588]: pam_unix(sudo:session): session closed for user root
Sep 4 17:20:26.667405 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:20:26.667681 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:20:26.685985 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:20:26.687171 auditctl[1591]: No rules
Sep 4 17:20:26.688004 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:20:26.689613 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:20:26.691339 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:20:26.715508 augenrules[1609]: No rules
Sep 4 17:20:26.716676 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:20:26.717934 sudo[1587]: pam_unix(sudo:session): session closed for user root
Sep 4 17:20:26.719560 sshd[1584]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:26.734029 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:44290.service: Deactivated successfully.
Sep 4 17:20:26.735545 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:20:26.736229 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:20:26.757933 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:44302.service - OpenSSH per-connection server daemon (10.0.0.1:44302).
Sep 4 17:20:26.759024 systemd-logind[1424]: Removed session 6.
Sep 4 17:20:26.790766 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:20:26.792270 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:20:26.796299 systemd-logind[1424]: New session 7 of user core.
Sep 4 17:20:26.811778 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:20:26.864153 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:20:26.864419 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:20:26.992028 (dockerd)[1631]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:20:26.992154 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:20:27.268895 dockerd[1631]: time="2024-09-04T17:20:27.268751788Z" level=info msg="Starting up"
Sep 4 17:20:27.431004 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport276031604-merged.mount: Deactivated successfully.
Sep 4 17:20:27.451953 dockerd[1631]: time="2024-09-04T17:20:27.451903281Z" level=info msg="Loading containers: start."
Sep 4 17:20:27.537617 kernel: Initializing XFRM netlink socket
Sep 4 17:20:27.602406 systemd-networkd[1380]: docker0: Link UP
Sep 4 17:20:27.618116 dockerd[1631]: time="2024-09-04T17:20:27.618055076Z" level=info msg="Loading containers: done."
Sep 4 17:20:27.640142 dockerd[1631]: time="2024-09-04T17:20:27.640080288Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:20:27.640309 dockerd[1631]: time="2024-09-04T17:20:27.640187562Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 4 17:20:27.640309 dockerd[1631]: time="2024-09-04T17:20:27.640304024Z" level=info msg="Daemon has completed initialization"
Sep 4 17:20:27.672005 dockerd[1631]: time="2024-09-04T17:20:27.671319495Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:20:27.671570 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:20:28.332736 containerd[1446]: time="2024-09-04T17:20:28.332685329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\""
Sep 4 17:20:28.429104 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3983197767-merged.mount: Deactivated successfully.
Sep 4 17:20:29.115266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028808661.mount: Deactivated successfully.
Sep 4 17:20:30.732224 containerd[1446]: time="2024-09-04T17:20:30.732104823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:30.733112 containerd[1446]: time="2024-09-04T17:20:30.732856841Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=32283564"
Sep 4 17:20:30.733927 containerd[1446]: time="2024-09-04T17:20:30.733888971Z" level=info msg="ImageCreate event name:\"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:30.737239 containerd[1446]: time="2024-09-04T17:20:30.737205576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:30.741017 containerd[1446]: time="2024-09-04T17:20:30.740978850Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"32280362\" in 2.408242171s"
Sep 4 17:20:30.741190 containerd[1446]: time="2024-09-04T17:20:30.741148252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\""
Sep 4 17:20:30.760197 containerd[1446]: time="2024-09-04T17:20:30.760152630Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\""
Sep 4 17:20:30.962171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:20:30.969772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:31.060395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:31.065507 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:20:31.117595 kubelet[1857]: E0904 17:20:31.117534 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:20:31.121516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:20:31.121680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:20:32.854193 containerd[1446]: time="2024-09-04T17:20:32.854120660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:32.854697 containerd[1446]: time="2024-09-04T17:20:32.854651121Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=29368212"
Sep 4 17:20:32.855527 containerd[1446]: time="2024-09-04T17:20:32.855490519Z" level=info msg="ImageCreate event name:\"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:32.858410 containerd[1446]: time="2024-09-04T17:20:32.858383380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:32.859726 containerd[1446]: time="2024-09-04T17:20:32.859692528Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"30855477\" in 2.099372842s"
Sep 4 17:20:32.859779 containerd[1446]: time="2024-09-04T17:20:32.859732119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\""
Sep 4 17:20:32.877980 containerd[1446]: time="2024-09-04T17:20:32.877845327Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\""
Sep 4 17:20:34.028969 containerd[1446]: time="2024-09-04T17:20:34.028828508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:34.030366 containerd[1446]: time="2024-09-04T17:20:34.029887921Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=15751075"
Sep 4 17:20:34.031297 containerd[1446]: time="2024-09-04T17:20:34.031265182Z" level=info msg="ImageCreate event name:\"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:34.034638 containerd[1446]: time="2024-09-04T17:20:34.034570753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:34.035719 containerd[1446]: time="2024-09-04T17:20:34.035595826Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"17238358\" in 1.15769828s"
Sep 4 17:20:34.035719 containerd[1446]: time="2024-09-04T17:20:34.035628602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\""
Sep 4 17:20:34.059437 containerd[1446]: time="2024-09-04T17:20:34.059399762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\""
Sep 4 17:20:35.076803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245672974.mount: Deactivated successfully.
Sep 4 17:20:35.390824 containerd[1446]: time="2024-09-04T17:20:35.390681204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:35.391471 containerd[1446]: time="2024-09-04T17:20:35.391356610Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=25251885"
Sep 4 17:20:35.392163 containerd[1446]: time="2024-09-04T17:20:35.392119961Z" level=info msg="ImageCreate event name:\"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:35.394679 containerd[1446]: time="2024-09-04T17:20:35.394642527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:35.395308 containerd[1446]: time="2024-09-04T17:20:35.395267043Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"25250902\" in 1.335803539s"
Sep 4 17:20:35.395342 containerd[1446]: time="2024-09-04T17:20:35.395308870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\""
Sep 4 17:20:35.414526 containerd[1446]: time="2024-09-04T17:20:35.414481145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:20:36.032235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744010838.mount: Deactivated successfully.
Sep 4 17:20:36.790989 containerd[1446]: time="2024-09-04T17:20:36.790930424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:36.791500 containerd[1446]: time="2024-09-04T17:20:36.791451269Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Sep 4 17:20:36.792364 containerd[1446]: time="2024-09-04T17:20:36.792319972Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:36.796464 containerd[1446]: time="2024-09-04T17:20:36.796420985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:36.797282 containerd[1446]: time="2024-09-04T17:20:36.797197923Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.382676198s"
Sep 4 17:20:36.797282 containerd[1446]: time="2024-09-04T17:20:36.797229674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Sep 4 17:20:36.816321 containerd[1446]: time="2024-09-04T17:20:36.816279725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:20:37.329506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657943599.mount: Deactivated successfully.
Sep 4 17:20:37.333610 containerd[1446]: time="2024-09-04T17:20:37.333303616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:37.334348 containerd[1446]: time="2024-09-04T17:20:37.334308944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Sep 4 17:20:37.335327 containerd[1446]: time="2024-09-04T17:20:37.335270426Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:37.337879 containerd[1446]: time="2024-09-04T17:20:37.337839616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:37.338944 containerd[1446]: time="2024-09-04T17:20:37.338708036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 522.384255ms"
Sep 4 17:20:37.338944 containerd[1446]: time="2024-09-04T17:20:37.338747233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:20:37.357929 containerd[1446]: time="2024-09-04T17:20:37.357895841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:20:37.944575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033729018.mount: Deactivated successfully.
Sep 4 17:20:41.041037 containerd[1446]: time="2024-09-04T17:20:41.040982721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:41.042506 containerd[1446]: time="2024-09-04T17:20:41.042472952Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Sep 4 17:20:41.043508 containerd[1446]: time="2024-09-04T17:20:41.043472420Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:41.046681 containerd[1446]: time="2024-09-04T17:20:41.046633731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:41.047945 containerd[1446]: time="2024-09-04T17:20:41.047909436Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.689977571s"
Sep 4 17:20:41.048007 containerd[1446]: time="2024-09-04T17:20:41.047943435Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Sep 4 17:20:41.301554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:20:41.317025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:41.409565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:41.413141 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:20:41.459226 kubelet[2043]: E0904 17:20:41.459138 2043 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:20:41.462170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:20:41.462314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:20:45.600261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:45.614839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:45.633547 systemd[1]: Reloading requested from client PID 2103 ('systemctl') (unit session-7.scope)...
Sep 4 17:20:45.633568 systemd[1]: Reloading...
Sep 4 17:20:45.688707 zram_generator::config[2140]: No configuration found.
Sep 4 17:20:45.803576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:20:45.856847 systemd[1]: Reloading finished in 222 ms.
Sep 4 17:20:45.906560 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:20:45.906639 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:20:45.906866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:45.910167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:46.011664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:46.016727 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:20:46.063656 kubelet[2186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:20:46.063656 kubelet[2186]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:20:46.063656 kubelet[2186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:20:46.063656 kubelet[2186]: I0904 17:20:46.058570 2186 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:20:47.183531 kubelet[2186]: I0904 17:20:47.183467 2186 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:20:47.183531 kubelet[2186]: I0904 17:20:47.183505 2186 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:20:47.183898 kubelet[2186]: I0904 17:20:47.183727 2186 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:20:47.210251 kubelet[2186]: E0904 17:20:47.210216 2186 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.211241 kubelet[2186]: I0904 17:20:47.211214 2186 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:20:47.221884 kubelet[2186]: I0904 17:20:47.221849 2186 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:20:47.223270 kubelet[2186]: I0904 17:20:47.223238 2186 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:20:47.223557 kubelet[2186]: I0904 17:20:47.223536 2186 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:20:47.223642 kubelet[2186]: I0904 17:20:47.223560 2186 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:20:47.223642 kubelet[2186]: I0904 17:20:47.223571 2186 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:20:47.225166 kubelet[2186]: I0904 17:20:47.224715 2186 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:20:47.229804 kubelet[2186]: I0904 17:20:47.229771 2186 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:20:47.229901 kubelet[2186]: I0904 17:20:47.229810 2186 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:20:47.229901 kubelet[2186]: I0904 17:20:47.229835 2186 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:20:47.229901 kubelet[2186]: I0904 17:20:47.229851 2186 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:20:47.230313 kubelet[2186]: W0904 17:20:47.230268 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.230354 kubelet[2186]: E0904 17:20:47.230320 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.232091 kubelet[2186]: W0904 17:20:47.231973 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.232091 kubelet[2186]: I0904 17:20:47.231992 2186 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:20:47.232091 kubelet[2186]: E0904 17:20:47.232011 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.232561 kubelet[2186]: I0904 17:20:47.232526 2186 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:20:47.233126 kubelet[2186]: W0904 17:20:47.233093 2186 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:20:47.234671 kubelet[2186]: I0904 17:20:47.233890 2186 server.go:1256] "Started kubelet"
Sep 4 17:20:47.234763 kubelet[2186]: I0904 17:20:47.234740 2186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:20:47.235310 kubelet[2186]: I0904 17:20:47.235003 2186 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:20:47.235310 kubelet[2186]: I0904 17:20:47.235063 2186 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:20:47.235862 kubelet[2186]: I0904 17:20:47.235838 2186 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:20:47.235929 kubelet[2186]: I0904 17:20:47.235844 2186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:20:47.237679 kubelet[2186]: I0904 17:20:47.237652 2186 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:20:47.237758 kubelet[2186]: I0904 17:20:47.237740 2186 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:20:47.237830 kubelet[2186]: I0904 17:20:47.237813 2186 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:20:47.238266 kubelet[2186]: W0904 17:20:47.238224 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.238372 kubelet[2186]: E0904 17:20:47.238359 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.238604 kubelet[2186]: E0904 17:20:47.238501 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms"
Sep 4 17:20:47.239423 kubelet[2186]: I0904 17:20:47.239396 2186 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:20:47.239499 kubelet[2186]: I0904 17:20:47.239477 2186 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:20:47.239645 kubelet[2186]: E0904 17:20:47.239629 2186 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:20:47.240493 kubelet[2186]: E0904 17:20:47.240463 2186 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21a3ad5cf1611 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:20:47.233865233 +0000 UTC m=+1.213740502,LastTimestamp:2024-09-04 17:20:47.233865233 +0000 UTC m=+1.213740502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 17:20:47.241351 kubelet[2186]: I0904 17:20:47.240643 2186 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:20:47.251313 kubelet[2186]: I0904 17:20:47.251267 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:20:47.252910 kubelet[2186]: I0904 17:20:47.252484 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:20:47.252910 kubelet[2186]: I0904 17:20:47.252507 2186 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:20:47.252910 kubelet[2186]: I0904 17:20:47.252523 2186 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:20:47.252910 kubelet[2186]: E0904 17:20:47.252571 2186 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:20:47.255731 kubelet[2186]: W0904 17:20:47.255672 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.255731 kubelet[2186]: E0904 17:20:47.255735 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:47.256206 kubelet[2186]: I0904 17:20:47.256191 2186 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:20:47.256206 kubelet[2186]: I0904 17:20:47.256206 2186 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:20:47.256276 kubelet[2186]: I0904 17:20:47.256235 2186 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:20:47.260852 kubelet[2186]: I0904 17:20:47.260830 2186 policy_none.go:49] "None policy: Start"
Sep 4 17:20:47.261512 kubelet[2186]: I0904 17:20:47.261494 2186 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:20:47.261618 kubelet[2186]: I0904 17:20:47.261606 2186 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:20:47.268658 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:20:47.281323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:20:47.284068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:20:47.296735 kubelet[2186]: I0904 17:20:47.296410 2186 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:20:47.296735 kubelet[2186]: I0904 17:20:47.296676 2186 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:20:47.297788 kubelet[2186]: E0904 17:20:47.297766 2186 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 17:20:47.339748 kubelet[2186]: I0904 17:20:47.339725 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:20:47.340119 kubelet[2186]: E0904 17:20:47.340102 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Sep 4 17:20:47.353452 kubelet[2186]: I0904 17:20:47.353235 2186 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:20:47.354306 kubelet[2186]: I0904 17:20:47.354236 2186 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:20:47.355750 kubelet[2186]: I0904 17:20:47.355717 2186 topology_manager.go:215] "Topology Admit Handler" podUID="27e63bfd9969a50c094ce3184a73a4fc" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:20:47.360281 systemd[1]: Created slice kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice - libcontainer container kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice.
Sep 4 17:20:47.374491 systemd[1]: Created slice kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice - libcontainer container kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice.
Sep 4 17:20:47.391170 systemd[1]: Created slice kubepods-burstable-pod27e63bfd9969a50c094ce3184a73a4fc.slice - libcontainer container kubepods-burstable-pod27e63bfd9969a50c094ce3184a73a4fc.slice.
Sep 4 17:20:47.439211 kubelet[2186]: E0904 17:20:47.439119 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms"
Sep 4 17:20:47.440609 kubelet[2186]: I0904 17:20:47.440246 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:47.440609 kubelet[2186]: I0904 17:20:47.440285 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:47.440609 kubelet[2186]: I0904 17:20:47.440322 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:47.440609 kubelet[2186]: I0904 17:20:47.440351 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27e63bfd9969a50c094ce3184a73a4fc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27e63bfd9969a50c094ce3184a73a4fc\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:20:47.440609 kubelet[2186]: I0904 17:20:47.440373 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27e63bfd9969a50c094ce3184a73a4fc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27e63bfd9969a50c094ce3184a73a4fc\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:20:47.440782 kubelet[2186]: I0904 17:20:47.440392 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:47.440782 kubelet[2186]: I0904 17:20:47.440411 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:47.440782 kubelet[2186]: I0904 17:20:47.440430 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 17:20:47.440782 kubelet[2186]: I0904 17:20:47.440448 2186 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27e63bfd9969a50c094ce3184a73a4fc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27e63bfd9969a50c094ce3184a73a4fc\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:20:47.541607 kubelet[2186]: I0904 17:20:47.541552 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:20:47.541949 kubelet[2186]: E0904 17:20:47.541910 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Sep 4 17:20:47.673781 kubelet[2186]: E0904 17:20:47.673557 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:47.674232 containerd[1446]: time="2024-09-04T17:20:47.674192464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,}"
Sep 4 17:20:47.689974 kubelet[2186]: E0904 17:20:47.689860 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:47.690333 containerd[1446]: time="2024-09-04T17:20:47.690297090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,}"
Sep 4 17:20:47.693749 kubelet[2186]: E0904 17:20:47.693669 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:47.694110 containerd[1446]: time="2024-09-04T17:20:47.694041542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27e63bfd9969a50c094ce3184a73a4fc,Namespace:kube-system,Attempt:0,}"
Sep 4 17:20:47.840621 kubelet[2186]: E0904 17:20:47.840557 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms"
Sep 4 17:20:47.944107 kubelet[2186]: I0904 17:20:47.944004 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:20:47.944356 kubelet[2186]: E0904 17:20:47.944334 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Sep 4 17:20:48.166566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934949394.mount: Deactivated successfully.
Sep 4 17:20:48.173733 containerd[1446]: time="2024-09-04T17:20:48.173677357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:20:48.174665 containerd[1446]: time="2024-09-04T17:20:48.174631828Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:20:48.175423 containerd[1446]: time="2024-09-04T17:20:48.175344710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:20:48.176205 containerd[1446]: time="2024-09-04T17:20:48.176151474Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:20:48.177833 containerd[1446]: time="2024-09-04T17:20:48.177790493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:20:48.178311 containerd[1446]: time="2024-09-04T17:20:48.178104475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:20:48.178929 containerd[1446]: time="2024-09-04T17:20:48.178867339Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 4 17:20:48.180815 containerd[1446]: time="2024-09-04T17:20:48.180771199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:20:48.182811 containerd[1446]: time="2024-09-04T17:20:48.182775864Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.401534ms"
Sep 4 17:20:48.184851 containerd[1446]: time="2024-09-04T17:20:48.184763561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.657986ms"
Sep 4 17:20:48.187780 containerd[1446]: time="2024-09-04T17:20:48.187738583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.460716ms"
Sep 4 17:20:48.232153 kubelet[2186]: W0904 17:20:48.231947 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.232153 kubelet[2186]: E0904 17:20:48.232009 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.326390 containerd[1446]: time="2024-09-04T17:20:48.326275270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:48.326390 containerd[1446]: time="2024-09-04T17:20:48.326335657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:48.326390 containerd[1446]: time="2024-09-04T17:20:48.326351064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:48.327153 containerd[1446]: time="2024-09-04T17:20:48.326929125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:48.327153 containerd[1446]: time="2024-09-04T17:20:48.327000317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:48.327153 containerd[1446]: time="2024-09-04T17:20:48.327012122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:48.327153 containerd[1446]: time="2024-09-04T17:20:48.327091998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:48.327531 containerd[1446]: time="2024-09-04T17:20:48.327314739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:48.327531 containerd[1446]: time="2024-09-04T17:20:48.327360920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:48.327531 containerd[1446]: time="2024-09-04T17:20:48.327378888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:48.327531 containerd[1446]: time="2024-09-04T17:20:48.327452761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:48.329019 containerd[1446]: time="2024-09-04T17:20:48.328918103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:48.353796 systemd[1]: Started cri-containerd-5ebbc14f96f88ed74338ba58c913f1677f8965150a60801c1b2821da7e227ee1.scope - libcontainer container 5ebbc14f96f88ed74338ba58c913f1677f8965150a60801c1b2821da7e227ee1.
Sep 4 17:20:48.354976 systemd[1]: Started cri-containerd-aa15978a03d7184b921aba8c4002fbf00e8faeae57f0f90db4711384671ee248.scope - libcontainer container aa15978a03d7184b921aba8c4002fbf00e8faeae57f0f90db4711384671ee248.
Sep 4 17:20:48.355957 systemd[1]: Started cri-containerd-b013cab8dee485db6a20cf8b71ce85e08aa78d7db5ed858e2b70aad05932035a.scope - libcontainer container b013cab8dee485db6a20cf8b71ce85e08aa78d7db5ed858e2b70aad05932035a.
Sep 4 17:20:48.391326 containerd[1446]: time="2024-09-04T17:20:48.391286732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ebbc14f96f88ed74338ba58c913f1677f8965150a60801c1b2821da7e227ee1\""
Sep 4 17:20:48.391858 containerd[1446]: time="2024-09-04T17:20:48.391662621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa15978a03d7184b921aba8c4002fbf00e8faeae57f0f90db4711384671ee248\""
Sep 4 17:20:48.392200 kubelet[2186]: E0904 17:20:48.392173 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:48.394915 kubelet[2186]: E0904 17:20:48.394888 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:48.397724 containerd[1446]: time="2024-09-04T17:20:48.397607144Z" level=info msg="CreateContainer within sandbox \"5ebbc14f96f88ed74338ba58c913f1677f8965150a60801c1b2821da7e227ee1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 17:20:48.398165 containerd[1446]: time="2024-09-04T17:20:48.398023012Z" level=info msg="CreateContainer within sandbox \"aa15978a03d7184b921aba8c4002fbf00e8faeae57f0f90db4711384671ee248\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 17:20:48.417596 containerd[1446]: time="2024-09-04T17:20:48.417532377Z" level=info msg="CreateContainer within sandbox \"aa15978a03d7184b921aba8c4002fbf00e8faeae57f0f90db4711384671ee248\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9559ff7e8cab7a5a9f79fefe86b4c5fb74d27ce295677b9b75d7804b1b64daf\""
Sep 4 17:20:48.418037 containerd[1446]: time="2024-09-04T17:20:48.417985982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27e63bfd9969a50c094ce3184a73a4fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b013cab8dee485db6a20cf8b71ce85e08aa78d7db5ed858e2b70aad05932035a\""
Sep 4 17:20:48.418439 containerd[1446]: time="2024-09-04T17:20:48.418408853Z" level=info msg="StartContainer for \"c9559ff7e8cab7a5a9f79fefe86b4c5fb74d27ce295677b9b75d7804b1b64daf\""
Sep 4 17:20:48.418903 kubelet[2186]: E0904 17:20:48.418883 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:48.420139 containerd[1446]: time="2024-09-04T17:20:48.420103378Z" level=info msg="CreateContainer within sandbox \"5ebbc14f96f88ed74338ba58c913f1677f8965150a60801c1b2821da7e227ee1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f24a33fa3ff05e893f43cdcf48ed032dc9c0c831c54b90a45df97b8f0b2efed6\""
Sep 4 17:20:48.420482 containerd[1446]: time="2024-09-04T17:20:48.420457618Z" level=info msg="StartContainer for \"f24a33fa3ff05e893f43cdcf48ed032dc9c0c831c54b90a45df97b8f0b2efed6\""
Sep 4 17:20:48.420814 containerd[1446]: time="2024-09-04T17:20:48.420777562Z" level=info msg="CreateContainer within sandbox \"b013cab8dee485db6a20cf8b71ce85e08aa78d7db5ed858e2b70aad05932035a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 17:20:48.437367 containerd[1446]: time="2024-09-04T17:20:48.437173882Z" level=info msg="CreateContainer within sandbox \"b013cab8dee485db6a20cf8b71ce85e08aa78d7db5ed858e2b70aad05932035a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a35f90bdc62abc2eae162df0a1824ac74e4955f46056f4c8b52bb66ba70312be\""
Sep 4 17:20:48.437904 containerd[1446]: time="2024-09-04T17:20:48.437875279Z" level=info msg="StartContainer for \"a35f90bdc62abc2eae162df0a1824ac74e4955f46056f4c8b52bb66ba70312be\""
Sep 4 17:20:48.447766 systemd[1]: Started cri-containerd-c9559ff7e8cab7a5a9f79fefe86b4c5fb74d27ce295677b9b75d7804b1b64daf.scope - libcontainer container c9559ff7e8cab7a5a9f79fefe86b4c5fb74d27ce295677b9b75d7804b1b64daf.
Sep 4 17:20:48.448770 systemd[1]: Started cri-containerd-f24a33fa3ff05e893f43cdcf48ed032dc9c0c831c54b90a45df97b8f0b2efed6.scope - libcontainer container f24a33fa3ff05e893f43cdcf48ed032dc9c0c831c54b90a45df97b8f0b2efed6.
Sep 4 17:20:48.467842 kubelet[2186]: W0904 17:20:48.467765 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.467842 kubelet[2186]: E0904 17:20:48.467838 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.470725 systemd[1]: Started cri-containerd-a35f90bdc62abc2eae162df0a1824ac74e4955f46056f4c8b52bb66ba70312be.scope - libcontainer container a35f90bdc62abc2eae162df0a1824ac74e4955f46056f4c8b52bb66ba70312be.
Sep 4 17:20:48.484010 kubelet[2186]: W0904 17:20:48.483874 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.484010 kubelet[2186]: E0904 17:20:48.483937 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.497316 containerd[1446]: time="2024-09-04T17:20:48.496661131Z" level=info msg="StartContainer for \"c9559ff7e8cab7a5a9f79fefe86b4c5fb74d27ce295677b9b75d7804b1b64daf\" returns successfully"
Sep 4 17:20:48.503794 containerd[1446]: time="2024-09-04T17:20:48.503504740Z" level=info msg="StartContainer for \"f24a33fa3ff05e893f43cdcf48ed032dc9c0c831c54b90a45df97b8f0b2efed6\" returns successfully"
Sep 4 17:20:48.531120 containerd[1446]: time="2024-09-04T17:20:48.527650517Z" level=info msg="StartContainer for \"a35f90bdc62abc2eae162df0a1824ac74e4955f46056f4c8b52bb66ba70312be\" returns successfully"
Sep 4 17:20:48.641259 kubelet[2186]: E0904 17:20:48.641211 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s"
Sep 4 17:20:48.665548 kubelet[2186]: W0904 17:20:48.664944 2186 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.665548 kubelet[2186]: E0904 17:20:48.665027 2186 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Sep 4 17:20:48.746077 kubelet[2186]: I0904 17:20:48.745855 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:20:48.746438 kubelet[2186]: E0904 17:20:48.746421 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Sep 4 17:20:49.277116 kubelet[2186]: E0904 17:20:49.276680 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:49.277116 kubelet[2186]: E0904 17:20:49.276750 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:49.280506 kubelet[2186]: E0904 17:20:49.280290 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:50.063273 kubelet[2186]: E0904 17:20:50.061076 2186 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17f21a3ad5cf1611 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:20:47.233865233 +0000 UTC m=+1.213740502,LastTimestamp:2024-09-04 17:20:47.233865233 +0000 UTC m=+1.213740502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 17:20:50.245135 kubelet[2186]: E0904 17:20:50.245098 2186 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 4 17:20:50.283058 kubelet[2186]: E0904 17:20:50.283016 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:50.341805 kubelet[2186]: E0904 17:20:50.341695 2186 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 4 17:20:50.349721 kubelet[2186]: I0904 17:20:50.347890 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:20:50.358295 kubelet[2186]: I0904 17:20:50.358142 2186 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Sep 4 17:20:50.461479 kubelet[2186]: E0904 17:20:50.461120 2186 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 4 17:20:50.461479 kubelet[2186]: E0904 17:20:50.461413 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:51.233408 kubelet[2186]: I0904 17:20:51.233362 2186 apiserver.go:52] "Watching apiserver"
Sep 4 17:20:51.338427 kubelet[2186]: I0904 17:20:51.338366 2186 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:20:52.385296 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-7.scope)...
Sep 4 17:20:52.385312 systemd[1]: Reloading...
Sep 4 17:20:52.459617 zram_generator::config[2502]: No configuration found.
Sep 4 17:20:52.544461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:20:52.609783 systemd[1]: Reloading finished in 224 ms.
Sep 4 17:20:52.643945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:52.660909 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:20:52.661121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:52.661172 systemd[1]: kubelet.service: Consumed 1.584s CPU time, 117.1M memory peak, 0B memory swap peak.
Sep 4 17:20:52.680390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:20:52.771250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:20:52.776454 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:20:52.827095 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:20:52.827095 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:20:52.827095 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:20:52.827565 kubelet[2544]: I0904 17:20:52.827140 2544 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:20:52.832457 kubelet[2544]: I0904 17:20:52.832420 2544 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:20:52.833241 kubelet[2544]: I0904 17:20:52.832614 2544 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:20:52.833241 kubelet[2544]: I0904 17:20:52.832824 2544 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:20:52.834368 kubelet[2544]: I0904 17:20:52.834326 2544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 17:20:52.836449 kubelet[2544]: I0904 17:20:52.836305 2544 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:20:52.845136 kubelet[2544]: I0904 17:20:52.845113 2544 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:20:52.845848 kubelet[2544]: I0904 17:20:52.845507 2544 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:20:52.845848 kubelet[2544]: I0904 17:20:52.845695 2544 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:20:52.845848 kubelet[2544]: I0904 17:20:52.845713 2544 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:20:52.845848 kubelet[2544]: I0904 17:20:52.845722 2544 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:20:52.845848 kubelet[2544]: I0904 17:20:52.845752 2544 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:20:52.846085 kubelet[2544]: I0904 17:20:52.846071 2544 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:20:52.846158 kubelet[2544]: I0904 17:20:52.846148 2544 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:20:52.846731 kubelet[2544]: I0904 17:20:52.846711 2544 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:20:52.846859 kubelet[2544]: I0904 17:20:52.846848 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:20:52.853300 kubelet[2544]: I0904 17:20:52.853274 2544 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:20:52.853507 kubelet[2544]: I0904 17:20:52.853464 2544 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:20:52.853977 kubelet[2544]: I0904 17:20:52.853953 2544 server.go:1256] "Started kubelet"
Sep 4 17:20:52.855681 kubelet[2544]: I0904 17:20:52.855649 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:20:52.855908 kubelet[2544]: I0904 17:20:52.855884 2544 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:20:52.855951 kubelet[2544]: I0904 17:20:52.855940 2544 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:20:52.856741 kubelet[2544]: I0904 17:20:52.856713 2544 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:20:52.858687 kubelet[2544]: I0904 17:20:52.858659 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:20:52.868848 kubelet[2544]: I0904 17:20:52.866507 2544 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:20:52.868848 kubelet[2544]: I0904 17:20:52.866900 2544 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:20:52.868848 kubelet[2544]: I0904 17:20:52.867038 2544 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:20:52.875605 kubelet[2544]: E0904 17:20:52.875548 2544 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:20:52.876751 kubelet[2544]: I0904 17:20:52.876052 2544 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:20:52.876751 kubelet[2544]: I0904 17:20:52.876070 2544 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:20:52.876751 kubelet[2544]: I0904 17:20:52.876183 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:20:52.883673 kubelet[2544]: I0904 17:20:52.883620 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:20:52.887479 kubelet[2544]: I0904 17:20:52.887456 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:20:52.887479 kubelet[2544]: I0904 17:20:52.887483 2544 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:20:52.887614 kubelet[2544]: I0904 17:20:52.887500 2544 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:20:52.887614 kubelet[2544]: E0904 17:20:52.887551 2544 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:20:52.921658 kubelet[2544]: I0904 17:20:52.921634 2544 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:20:52.921658 kubelet[2544]: I0904 17:20:52.921653 2544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:20:52.921658 kubelet[2544]: I0904 17:20:52.921671 2544 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:20:52.921866 kubelet[2544]: I0904 17:20:52.921847 2544 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 17:20:52.921902 kubelet[2544]: I0904 17:20:52.921874 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 17:20:52.921902 kubelet[2544]: I0904 17:20:52.921881 2544 policy_none.go:49] "None policy: Start"
Sep 4 17:20:52.922622 kubelet[2544]: I0904 17:20:52.922603 2544 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:20:52.922622 kubelet[2544]: I0904 17:20:52.922627 2544 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:20:52.922891 kubelet[2544]: I0904 17:20:52.922865 2544 state_mem.go:75] "Updated machine memory state"
Sep 4 17:20:52.928201 kubelet[2544]: I0904 17:20:52.927798 2544 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:20:52.928201 kubelet[2544]: I0904 17:20:52.928038 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:20:52.973694 kubelet[2544]: I0904 17:20:52.971765 2544 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:20:52.979399 kubelet[2544]: I0904 17:20:52.979250 2544 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Sep 4 17:20:52.979399 kubelet[2544]: I0904 17:20:52.979348 2544 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Sep 4 17:20:52.987707 kubelet[2544]: I0904 17:20:52.987678 2544 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:20:52.987817 kubelet[2544]: I0904 17:20:52.987779 2544 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:20:52.987861 kubelet[2544]: I0904 17:20:52.987841 2544 topology_manager.go:215] "Topology Admit Handler" podUID="27e63bfd9969a50c094ce3184a73a4fc" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:20:53.068279 kubelet[2544]: I0904 17:20:53.068235 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:53.068423 kubelet[2544]: I0904 17:20:53.068371 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27e63bfd9969a50c094ce3184a73a4fc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27e63bfd9969a50c094ce3184a73a4fc\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:20:53.068423 kubelet[2544]: I0904 17:20:53.068394 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27e63bfd9969a50c094ce3184a73a4fc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27e63bfd9969a50c094ce3184a73a4fc\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:20:53.068518 kubelet[2544]: I0904 17:20:53.068447 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:53.068518 kubelet[2544]: I0904 17:20:53.068499 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:53.068567 kubelet[2544]: I0904 17:20:53.068532 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:53.068567 kubelet[2544]: I0904 17:20:53.068563 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:53.068644 kubelet[2544]: I0904 17:20:53.068595 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 17:20:53.068644 kubelet[2544]: I0904 17:20:53.068629 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27e63bfd9969a50c094ce3184a73a4fc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27e63bfd9969a50c094ce3184a73a4fc\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:20:53.293490 kubelet[2544]: E0904 17:20:53.293366 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:53.294058 kubelet[2544]: E0904 17:20:53.293883 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:53.294906 kubelet[2544]: E0904 17:20:53.294512 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:53.853352 kubelet[2544]: I0904 17:20:53.848849 2544 apiserver.go:52] "Watching apiserver"
Sep 4 17:20:53.867389 kubelet[2544]: I0904 17:20:53.867330 2544 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:20:53.905534 kubelet[2544]: E0904 17:20:53.905303 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:53.905534 kubelet[2544]: E0904 17:20:53.905446 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:53.912412 kubelet[2544]: E0904 17:20:53.910830 2544 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:20:53.912412 kubelet[2544]: E0904 17:20:53.911242 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:53.942767 kubelet[2544]: I0904 17:20:53.942711 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9426640389999998 podStartE2EDuration="1.942664039s" podCreationTimestamp="2024-09-04 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:53.933374153 +0000 UTC m=+1.152487737" watchObservedRunningTime="2024-09-04 17:20:53.942664039 +0000 UTC m=+1.161777623"
Sep 4 17:20:53.949223 kubelet[2544]: I0904 17:20:53.949004 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.948849442 podStartE2EDuration="1.948849442s" podCreationTimestamp="2024-09-04 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:53.942830758 +0000 UTC m=+1.161944302" watchObservedRunningTime="2024-09-04 17:20:53.948849442 +0000 UTC m=+1.167963026"
Sep 4 17:20:53.949223 kubelet[2544]: I0904 17:20:53.949145 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.949125586 podStartE2EDuration="1.949125586s" podCreationTimestamp="2024-09-04 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:53.949118905 +0000 UTC m=+1.168232489" watchObservedRunningTime="2024-09-04 17:20:53.949125586 +0000 UTC m=+1.168239130"
Sep 4 17:20:54.905899 kubelet[2544]: E0904 17:20:54.905859 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:54.905899 kubelet[2544]: E0904 17:20:54.905897 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:56.847006 sudo[1621]: pam_unix(sudo:session): session closed for user root
Sep 4 17:20:56.848680 sshd[1617]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:56.852648 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:44302.service: Deactivated successfully.
Sep 4 17:20:56.854798 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:20:56.855015 systemd[1]: session-7.scope: Consumed 6.554s CPU time, 139.9M memory peak, 0B memory swap peak.
Sep 4 17:20:56.856218 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:20:56.859120 systemd-logind[1424]: Removed session 7.
Sep 4 17:20:57.474046 kubelet[2544]: E0904 17:20:57.473998 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:57.747411 kubelet[2544]: E0904 17:20:57.746137 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:58.957950 kubelet[2544]: E0904 17:20:58.957877 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:59.914042 kubelet[2544]: E0904 17:20:59.913011 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:04.094499 update_engine[1431]: I0904 17:21:04.093961 1431 update_attempter.cc:509] Updating boot flags...
Sep 4 17:21:04.114613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2640)
Sep 4 17:21:04.143246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2639)
Sep 4 17:21:07.348548 kubelet[2544]: I0904 17:21:07.348515 2544 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:21:07.349096 containerd[1446]: time="2024-09-04T17:21:07.349064224Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:21:07.349427 kubelet[2544]: I0904 17:21:07.349312 2544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:21:07.482288 kubelet[2544]: E0904 17:21:07.482239 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:07.755888 kubelet[2544]: E0904 17:21:07.755542 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:07.927605 kubelet[2544]: E0904 17:21:07.927212 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:08.371491 kubelet[2544]: I0904 17:21:08.370146 2544 topology_manager.go:215] "Topology Admit Handler" podUID="fe1f1012-ebf8-4116-acc1-9449d5803765" podNamespace="kube-system" podName="kube-proxy-bbw8m"
Sep 4 17:21:08.385771 systemd[1]: Created slice kubepods-besteffort-podfe1f1012_ebf8_4116_acc1_9449d5803765.slice - libcontainer container kubepods-besteffort-podfe1f1012_ebf8_4116_acc1_9449d5803765.slice.
Sep 4 17:21:08.470129 kubelet[2544]: I0904 17:21:08.469776 2544 topology_manager.go:215] "Topology Admit Handler" podUID="7990d86c-8b21-4189-92a6-2a3e29805cd2" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-nrcgp"
Sep 4 17:21:08.473439 kubelet[2544]: I0904 17:21:08.473400 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe1f1012-ebf8-4116-acc1-9449d5803765-lib-modules\") pod \"kube-proxy-bbw8m\" (UID: \"fe1f1012-ebf8-4116-acc1-9449d5803765\") " pod="kube-system/kube-proxy-bbw8m"
Sep 4 17:21:08.473555 kubelet[2544]: I0904 17:21:08.473449 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1f1012-ebf8-4116-acc1-9449d5803765-xtables-lock\") pod \"kube-proxy-bbw8m\" (UID: \"fe1f1012-ebf8-4116-acc1-9449d5803765\") " pod="kube-system/kube-proxy-bbw8m"
Sep 4 17:21:08.473555 kubelet[2544]: I0904 17:21:08.473473 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv89j\" (UniqueName: \"kubernetes.io/projected/fe1f1012-ebf8-4116-acc1-9449d5803765-kube-api-access-tv89j\") pod \"kube-proxy-bbw8m\" (UID: \"fe1f1012-ebf8-4116-acc1-9449d5803765\") " pod="kube-system/kube-proxy-bbw8m"
Sep 4 17:21:08.473555 kubelet[2544]: I0904 17:21:08.473510 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe1f1012-ebf8-4116-acc1-9449d5803765-kube-proxy\") pod \"kube-proxy-bbw8m\" (UID: \"fe1f1012-ebf8-4116-acc1-9449d5803765\") " pod="kube-system/kube-proxy-bbw8m"
Sep 4 17:21:08.477995 systemd[1]: Created slice kubepods-besteffort-pod7990d86c_8b21_4189_92a6_2a3e29805cd2.slice - libcontainer container kubepods-besteffort-pod7990d86c_8b21_4189_92a6_2a3e29805cd2.slice.
Sep 4 17:21:08.574681 kubelet[2544]: I0904 17:21:08.574521 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7990d86c-8b21-4189-92a6-2a3e29805cd2-var-lib-calico\") pod \"tigera-operator-5d56685c77-nrcgp\" (UID: \"7990d86c-8b21-4189-92a6-2a3e29805cd2\") " pod="tigera-operator/tigera-operator-5d56685c77-nrcgp"
Sep 4 17:21:08.574681 kubelet[2544]: I0904 17:21:08.574574 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5p9j\" (UniqueName: \"kubernetes.io/projected/7990d86c-8b21-4189-92a6-2a3e29805cd2-kube-api-access-z5p9j\") pod \"tigera-operator-5d56685c77-nrcgp\" (UID: \"7990d86c-8b21-4189-92a6-2a3e29805cd2\") " pod="tigera-operator/tigera-operator-5d56685c77-nrcgp"
Sep 4 17:21:08.700655 kubelet[2544]: E0904 17:21:08.700229 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:08.701308 containerd[1446]: time="2024-09-04T17:21:08.700960832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbw8m,Uid:fe1f1012-ebf8-4116-acc1-9449d5803765,Namespace:kube-system,Attempt:0,}"
Sep 4 17:21:08.727377 containerd[1446]: time="2024-09-04T17:21:08.727239834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:21:08.727377 containerd[1446]: time="2024-09-04T17:21:08.727329003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:21:08.727377 containerd[1446]: time="2024-09-04T17:21:08.727345765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:21:08.727676 containerd[1446]: time="2024-09-04T17:21:08.727452496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:21:08.757487 systemd[1]: Started cri-containerd-17a252ac040e32af4159b151bbfbbd805adc3c216cc6d019f0cb33fde505b393.scope - libcontainer container 17a252ac040e32af4159b151bbfbbd805adc3c216cc6d019f0cb33fde505b393.
Sep 4 17:21:08.777042 containerd[1446]: time="2024-09-04T17:21:08.776998378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbw8m,Uid:fe1f1012-ebf8-4116-acc1-9449d5803765,Namespace:kube-system,Attempt:0,} returns sandbox id \"17a252ac040e32af4159b151bbfbbd805adc3c216cc6d019f0cb33fde505b393\""
Sep 4 17:21:08.777927 kubelet[2544]: E0904 17:21:08.777904 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:08.780872 containerd[1446]: time="2024-09-04T17:21:08.780832267Z" level=info msg="CreateContainer within sandbox \"17a252ac040e32af4159b151bbfbbd805adc3c216cc6d019f0cb33fde505b393\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:21:08.783826 containerd[1446]: time="2024-09-04T17:21:08.783754098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-nrcgp,Uid:7990d86c-8b21-4189-92a6-2a3e29805cd2,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:21:08.800513 containerd[1446]: time="2024-09-04T17:21:08.799000964Z" level=info msg="CreateContainer within sandbox \"17a252ac040e32af4159b151bbfbbd805adc3c216cc6d019f0cb33fde505b393\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a7712299e012c022cf38a334317ffaf0b951b2ab2b70f1b27ec9f95b25e3c61\""
Sep 4 17:21:08.801192 containerd[1446]: time="2024-09-04T17:21:08.801145712Z" level=info msg="StartContainer for \"3a7712299e012c022cf38a334317ffaf0b951b2ab2b70f1b27ec9f95b25e3c61\""
Sep 4 17:21:08.810233 containerd[1446]: time="2024-09-04T17:21:08.810141311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:21:08.810233 containerd[1446]: time="2024-09-04T17:21:08.810194677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:21:08.810233 containerd[1446]: time="2024-09-04T17:21:08.810207118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:21:08.810617 containerd[1446]: time="2024-09-04T17:21:08.810295248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:21:08.830805 systemd[1]: Started cri-containerd-736eee218d7ab66f8b5089088113b4626b56ffdc59a7d4a9c4dc0bab3dd1c2be.scope - libcontainer container 736eee218d7ab66f8b5089088113b4626b56ffdc59a7d4a9c4dc0bab3dd1c2be.
Sep 4 17:21:08.833573 systemd[1]: Started cri-containerd-3a7712299e012c022cf38a334317ffaf0b951b2ab2b70f1b27ec9f95b25e3c61.scope - libcontainer container 3a7712299e012c022cf38a334317ffaf0b951b2ab2b70f1b27ec9f95b25e3c61.
Sep 4 17:21:08.861375 containerd[1446]: time="2024-09-04T17:21:08.861277203Z" level=info msg="StartContainer for \"3a7712299e012c022cf38a334317ffaf0b951b2ab2b70f1b27ec9f95b25e3c61\" returns successfully"
Sep 4 17:21:08.871427 containerd[1446]: time="2024-09-04T17:21:08.871306712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-nrcgp,Uid:7990d86c-8b21-4189-92a6-2a3e29805cd2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"736eee218d7ab66f8b5089088113b4626b56ffdc59a7d4a9c4dc0bab3dd1c2be\""
Sep 4 17:21:08.877280 containerd[1446]: time="2024-09-04T17:21:08.877142974Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:21:08.931436 kubelet[2544]: E0904 17:21:08.930591 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:08.940559 kubelet[2544]: I0904 17:21:08.940521 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bbw8m" podStartSLOduration=0.940484606 podStartE2EDuration="940.484606ms" podCreationTimestamp="2024-09-04 17:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:21:08.939146944 +0000 UTC m=+16.158260528" watchObservedRunningTime="2024-09-04 17:21:08.940484606 +0000 UTC m=+16.159598190"
Sep 4 17:21:09.764187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626764435.mount: Deactivated successfully.
Sep 4 17:21:10.076677 containerd[1446]: time="2024-09-04T17:21:10.076103756Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:21:10.077804 containerd[1446]: time="2024-09-04T17:21:10.077696031Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485963"
Sep 4 17:21:10.078418 containerd[1446]: time="2024-09-04T17:21:10.078392219Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:21:10.082997 containerd[1446]: time="2024-09-04T17:21:10.082944341Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:21:10.083999 containerd[1446]: time="2024-09-04T17:21:10.083964041Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.206778943s"
Sep 4 17:21:10.094746 containerd[1446]: time="2024-09-04T17:21:10.094684203Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Sep 4 17:21:10.117006 containerd[1446]: time="2024-09-04T17:21:10.116842917Z" level=info msg="CreateContainer within sandbox \"736eee218d7ab66f8b5089088113b4626b56ffdc59a7d4a9c4dc0bab3dd1c2be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:21:10.132555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786371289.mount: Deactivated successfully.
Sep 4 17:21:10.134938 containerd[1446]: time="2024-09-04T17:21:10.134897472Z" level=info msg="CreateContainer within sandbox \"736eee218d7ab66f8b5089088113b4626b56ffdc59a7d4a9c4dc0bab3dd1c2be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a1d235e40373e8c383fb26db20450c0c2e5b0092542a097d3d4cdb32a9f500b\""
Sep 4 17:21:10.138233 containerd[1446]: time="2024-09-04T17:21:10.137330869Z" level=info msg="StartContainer for \"4a1d235e40373e8c383fb26db20450c0c2e5b0092542a097d3d4cdb32a9f500b\""
Sep 4 17:21:10.167782 systemd[1]: Started cri-containerd-4a1d235e40373e8c383fb26db20450c0c2e5b0092542a097d3d4cdb32a9f500b.scope - libcontainer container 4a1d235e40373e8c383fb26db20450c0c2e5b0092542a097d3d4cdb32a9f500b.
Sep 4 17:21:10.202244 containerd[1446]: time="2024-09-04T17:21:10.202088524Z" level=info msg="StartContainer for \"4a1d235e40373e8c383fb26db20450c0c2e5b0092542a097d3d4cdb32a9f500b\" returns successfully"
Sep 4 17:21:14.808241 kubelet[2544]: I0904 17:21:14.807909 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-nrcgp" podStartSLOduration=5.58376156 podStartE2EDuration="6.807865631s" podCreationTimestamp="2024-09-04 17:21:08 +0000 UTC" firstStartedPulling="2024-09-04 17:21:08.872427551 +0000 UTC m=+16.091541135" lastFinishedPulling="2024-09-04 17:21:10.096531622 +0000 UTC m=+17.315645206" observedRunningTime="2024-09-04 17:21:10.952343738 +0000 UTC m=+18.171457322" watchObservedRunningTime="2024-09-04 17:21:14.807865631 +0000 UTC m=+22.026979215"
Sep 4 17:21:14.809315 kubelet[2544]: I0904 17:21:14.808859 2544 topology_manager.go:215] "Topology Admit Handler" podUID="b6929d3a-d7aa-4aca-8f4b-733365970b70" podNamespace="calico-system" podName="calico-typha-86576c5bd8-5bd5j"
Sep 4 17:21:14.823482 systemd[1]: Created slice kubepods-besteffort-podb6929d3a_d7aa_4aca_8f4b_733365970b70.slice - libcontainer container kubepods-besteffort-podb6929d3a_d7aa_4aca_8f4b_733365970b70.slice.
Sep 4 17:21:14.849920 kubelet[2544]: I0904 17:21:14.849098 2544 topology_manager.go:215] "Topology Admit Handler" podUID="5786c50b-ba63-49fb-9fba-dd9ce2eff2b6" podNamespace="calico-system" podName="calico-node-f6vv6"
Sep 4 17:21:14.857487 systemd[1]: Created slice kubepods-besteffort-pod5786c50b_ba63_49fb_9fba_dd9ce2eff2b6.slice - libcontainer container kubepods-besteffort-pod5786c50b_ba63_49fb_9fba_dd9ce2eff2b6.slice.
Sep 4 17:21:14.914870 kubelet[2544]: I0904 17:21:14.914832 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-policysync\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.914870 kubelet[2544]: I0904 17:21:14.914878 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-flexvol-driver-host\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915045 kubelet[2544]: I0904 17:21:14.914900 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-cni-net-dir\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915045 kubelet[2544]: I0904 17:21:14.914997 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28pgk\" (UniqueName: \"kubernetes.io/projected/b6929d3a-d7aa-4aca-8f4b-733365970b70-kube-api-access-28pgk\") pod \"calico-typha-86576c5bd8-5bd5j\" (UID: \"b6929d3a-d7aa-4aca-8f4b-733365970b70\") " pod="calico-system/calico-typha-86576c5bd8-5bd5j"
Sep 4 17:21:14.915045 kubelet[2544]: I0904 17:21:14.915037 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-node-certs\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915118 kubelet[2544]: I0904 17:21:14.915061 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-xtables-lock\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915118 kubelet[2544]: I0904 17:21:14.915084 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-var-run-calico\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915118 kubelet[2544]: I0904 17:21:14.915103 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-cni-bin-dir\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915185 kubelet[2544]: I0904 17:21:14.915123 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-cni-log-dir\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6"
Sep 4 17:21:14.915185 kubelet[2544]: I0904 17:21:14.915149 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6929d3a-d7aa-4aca-8f4b-733365970b70-tigera-ca-bundle\") pod \"calico-typha-86576c5bd8-5bd5j\" (UID: \"b6929d3a-d7aa-4aca-8f4b-733365970b70\") " pod="calico-system/calico-typha-86576c5bd8-5bd5j" Sep 4 17:21:14.915185 kubelet[2544]: I0904 17:21:14.915168 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6929d3a-d7aa-4aca-8f4b-733365970b70-typha-certs\") pod \"calico-typha-86576c5bd8-5bd5j\" (UID: \"b6929d3a-d7aa-4aca-8f4b-733365970b70\") " pod="calico-system/calico-typha-86576c5bd8-5bd5j" Sep 4 17:21:14.915254 kubelet[2544]: I0904 17:21:14.915189 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-tigera-ca-bundle\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6" Sep 4 17:21:14.915254 kubelet[2544]: I0904 17:21:14.915208 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-var-lib-calico\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6" Sep 4 17:21:14.915254 kubelet[2544]: I0904 17:21:14.915228 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vz4\" (UniqueName: \"kubernetes.io/projected/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-kube-api-access-v4vz4\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6" Sep 4 
17:21:14.915329 kubelet[2544]: I0904 17:21:14.915250 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5786c50b-ba63-49fb-9fba-dd9ce2eff2b6-lib-modules\") pod \"calico-node-f6vv6\" (UID: \"5786c50b-ba63-49fb-9fba-dd9ce2eff2b6\") " pod="calico-system/calico-node-f6vv6" Sep 4 17:21:14.966407 kubelet[2544]: I0904 17:21:14.966364 2544 topology_manager.go:215] "Topology Admit Handler" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" podNamespace="calico-system" podName="csi-node-driver-vf6pl" Sep 4 17:21:14.967651 kubelet[2544]: E0904 17:21:14.966685 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:15.017035 kubelet[2544]: I0904 17:21:15.015960 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0101168c-9721-4950-958b-1ab1d8e66f6e-socket-dir\") pod \"csi-node-driver-vf6pl\" (UID: \"0101168c-9721-4950-958b-1ab1d8e66f6e\") " pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:15.017035 kubelet[2544]: I0904 17:21:15.016014 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0101168c-9721-4950-958b-1ab1d8e66f6e-registration-dir\") pod \"csi-node-driver-vf6pl\" (UID: \"0101168c-9721-4950-958b-1ab1d8e66f6e\") " pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:15.017035 kubelet[2544]: I0904 17:21:15.016066 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0101168c-9721-4950-958b-1ab1d8e66f6e-kubelet-dir\") pod \"csi-node-driver-vf6pl\" (UID: \"0101168c-9721-4950-958b-1ab1d8e66f6e\") " pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:15.017035 kubelet[2544]: I0904 17:21:15.016188 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qg6\" (UniqueName: \"kubernetes.io/projected/0101168c-9721-4950-958b-1ab1d8e66f6e-kube-api-access-g5qg6\") pod \"csi-node-driver-vf6pl\" (UID: \"0101168c-9721-4950-958b-1ab1d8e66f6e\") " pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:15.017035 kubelet[2544]: I0904 17:21:15.016242 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0101168c-9721-4950-958b-1ab1d8e66f6e-varrun\") pod \"csi-node-driver-vf6pl\" (UID: \"0101168c-9721-4950-958b-1ab1d8e66f6e\") " pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:15.029420 kubelet[2544]: E0904 17:21:15.029380 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.029610 kubelet[2544]: W0904 17:21:15.029574 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.029696 kubelet[2544]: E0904 17:21:15.029683 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.038598 kubelet[2544]: E0904 17:21:15.038545 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.038598 kubelet[2544]: W0904 17:21:15.038567 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.039958 kubelet[2544]: E0904 17:21:15.038622 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.039958 kubelet[2544]: E0904 17:21:15.038905 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.039958 kubelet[2544]: W0904 17:21:15.038916 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.039958 kubelet[2544]: E0904 17:21:15.039023 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.044838 kubelet[2544]: E0904 17:21:15.044792 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.044838 kubelet[2544]: W0904 17:21:15.044820 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.045005 kubelet[2544]: E0904 17:21:15.044875 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.048990 kubelet[2544]: E0904 17:21:15.048963 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.048990 kubelet[2544]: W0904 17:21:15.048985 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.049138 kubelet[2544]: E0904 17:21:15.049007 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.117293 kubelet[2544]: E0904 17:21:15.117130 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.117293 kubelet[2544]: W0904 17:21:15.117155 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.117293 kubelet[2544]: E0904 17:21:15.117176 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.117462 kubelet[2544]: E0904 17:21:15.117386 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.117462 kubelet[2544]: W0904 17:21:15.117397 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.117462 kubelet[2544]: E0904 17:21:15.117410 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.118655 kubelet[2544]: E0904 17:21:15.118597 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.118655 kubelet[2544]: W0904 17:21:15.118612 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.118655 kubelet[2544]: E0904 17:21:15.118631 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.118977 kubelet[2544]: E0904 17:21:15.118958 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.118977 kubelet[2544]: W0904 17:21:15.118973 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.119246 kubelet[2544]: E0904 17:21:15.119098 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.119246 kubelet[2544]: E0904 17:21:15.119127 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.119246 kubelet[2544]: W0904 17:21:15.119150 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.119246 kubelet[2544]: E0904 17:21:15.119170 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.119902 kubelet[2544]: E0904 17:21:15.119745 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.119902 kubelet[2544]: W0904 17:21:15.119758 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.119902 kubelet[2544]: E0904 17:21:15.119835 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.120507 kubelet[2544]: E0904 17:21:15.120489 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.120507 kubelet[2544]: W0904 17:21:15.120506 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.120614 kubelet[2544]: E0904 17:21:15.120525 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.120725 kubelet[2544]: E0904 17:21:15.120714 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.120725 kubelet[2544]: W0904 17:21:15.120725 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.120794 kubelet[2544]: E0904 17:21:15.120774 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.121034 kubelet[2544]: E0904 17:21:15.121011 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.121034 kubelet[2544]: W0904 17:21:15.121026 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121060 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121253 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.121945 kubelet[2544]: W0904 17:21:15.121262 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121294 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121446 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.121945 kubelet[2544]: W0904 17:21:15.121455 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121481 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121618 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.121945 kubelet[2544]: W0904 17:21:15.121626 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.121945 kubelet[2544]: E0904 17:21:15.121650 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.122194 kubelet[2544]: E0904 17:21:15.121755 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.122194 kubelet[2544]: W0904 17:21:15.121767 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.122194 kubelet[2544]: E0904 17:21:15.121785 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.122494 kubelet[2544]: E0904 17:21:15.122480 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.122494 kubelet[2544]: W0904 17:21:15.122491 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.122681 kubelet[2544]: E0904 17:21:15.122510 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.122768 kubelet[2544]: E0904 17:21:15.122743 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.122768 kubelet[2544]: W0904 17:21:15.122756 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.122834 kubelet[2544]: E0904 17:21:15.122772 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.122971 kubelet[2544]: E0904 17:21:15.122924 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.122971 kubelet[2544]: W0904 17:21:15.122936 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.123037 kubelet[2544]: E0904 17:21:15.122994 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.123116 kubelet[2544]: E0904 17:21:15.123073 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.123116 kubelet[2544]: W0904 17:21:15.123084 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.123227 kubelet[2544]: E0904 17:21:15.123121 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.123307 kubelet[2544]: E0904 17:21:15.123255 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.123307 kubelet[2544]: W0904 17:21:15.123268 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.123376 kubelet[2544]: E0904 17:21:15.123320 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.123407 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.124170 kubelet[2544]: W0904 17:21:15.123418 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.123447 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.123560 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.124170 kubelet[2544]: W0904 17:21:15.123571 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.123627 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.123824 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.124170 kubelet[2544]: W0904 17:21:15.123834 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.123855 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.124170 kubelet[2544]: E0904 17:21:15.124109 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.124495 kubelet[2544]: W0904 17:21:15.124126 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.124495 kubelet[2544]: E0904 17:21:15.124146 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.125473 kubelet[2544]: E0904 17:21:15.125454 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.125473 kubelet[2544]: W0904 17:21:15.125471 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.125557 kubelet[2544]: E0904 17:21:15.125493 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.125836 kubelet[2544]: E0904 17:21:15.125820 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.125885 kubelet[2544]: W0904 17:21:15.125839 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.125885 kubelet[2544]: E0904 17:21:15.125863 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.126229 kubelet[2544]: E0904 17:21:15.126199 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.126273 kubelet[2544]: W0904 17:21:15.126230 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.126273 kubelet[2544]: E0904 17:21:15.126247 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:15.131210 kubelet[2544]: E0904 17:21:15.130874 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:15.133456 containerd[1446]: time="2024-09-04T17:21:15.131507363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86576c5bd8-5bd5j,Uid:b6929d3a-d7aa-4aca-8f4b-733365970b70,Namespace:calico-system,Attempt:0,}" Sep 4 17:21:15.139450 kubelet[2544]: E0904 17:21:15.139262 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:15.139450 kubelet[2544]: W0904 17:21:15.139286 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:15.139450 kubelet[2544]: E0904 17:21:15.139306 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:15.160650 kubelet[2544]: E0904 17:21:15.160587 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:15.162407 containerd[1446]: time="2024-09-04T17:21:15.162350461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6vv6,Uid:5786c50b-ba63-49fb-9fba-dd9ce2eff2b6,Namespace:calico-system,Attempt:0,}" Sep 4 17:21:15.188037 containerd[1446]: time="2024-09-04T17:21:15.187946028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:15.188037 containerd[1446]: time="2024-09-04T17:21:15.188030595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:15.188037 containerd[1446]: time="2024-09-04T17:21:15.188046956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:15.188503 containerd[1446]: time="2024-09-04T17:21:15.188162565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:15.202771 containerd[1446]: time="2024-09-04T17:21:15.202420083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:15.202771 containerd[1446]: time="2024-09-04T17:21:15.202490889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:15.202771 containerd[1446]: time="2024-09-04T17:21:15.202509490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:15.202771 containerd[1446]: time="2024-09-04T17:21:15.202686664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:15.219051 systemd[1]: Started cri-containerd-42a06bdb36cfb2d82b499bbb43eab14f08416ef4801ee81567c195f17f3bc11c.scope - libcontainer container 42a06bdb36cfb2d82b499bbb43eab14f08416ef4801ee81567c195f17f3bc11c. Sep 4 17:21:15.240832 systemd[1]: Started cri-containerd-c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da.scope - libcontainer container c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da. Sep 4 17:21:15.282456 containerd[1446]: time="2024-09-04T17:21:15.282406355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86576c5bd8-5bd5j,Uid:b6929d3a-d7aa-4aca-8f4b-733365970b70,Namespace:calico-system,Attempt:0,} returns sandbox id \"42a06bdb36cfb2d82b499bbb43eab14f08416ef4801ee81567c195f17f3bc11c\"" Sep 4 17:21:15.282651 containerd[1446]: time="2024-09-04T17:21:15.282541366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6vv6,Uid:5786c50b-ba63-49fb-9fba-dd9ce2eff2b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\"" Sep 4 17:21:15.284499 kubelet[2544]: E0904 17:21:15.283521 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:15.284499 kubelet[2544]: E0904 17:21:15.283539 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:15.285110 containerd[1446]: time="2024-09-04T17:21:15.285080005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:21:16.888868 
kubelet[2544]: E0904 17:21:16.888813 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:18.307345 containerd[1446]: time="2024-09-04T17:21:18.307296586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:18.309312 containerd[1446]: time="2024-09-04T17:21:18.308730445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:21:18.311642 containerd[1446]: time="2024-09-04T17:21:18.311603646Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:18.313416 containerd[1446]: time="2024-09-04T17:21:18.313378650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:18.314324 containerd[1446]: time="2024-09-04T17:21:18.314192986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 3.028880524s" Sep 4 17:21:18.314324 containerd[1446]: time="2024-09-04T17:21:18.314231149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 
17:21:18.314885 containerd[1446]: time="2024-09-04T17:21:18.314853112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:21:18.323857 containerd[1446]: time="2024-09-04T17:21:18.323679368Z" level=info msg="CreateContainer within sandbox \"42a06bdb36cfb2d82b499bbb43eab14f08416ef4801ee81567c195f17f3bc11c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:21:18.354140 containerd[1446]: time="2024-09-04T17:21:18.354090888Z" level=info msg="CreateContainer within sandbox \"42a06bdb36cfb2d82b499bbb43eab14f08416ef4801ee81567c195f17f3bc11c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b9d30d8fac6c23fbc5f098c380f7ff9dc4564425363af6ddc78a533ba5517c80\"" Sep 4 17:21:18.354828 containerd[1446]: time="2024-09-04T17:21:18.354799058Z" level=info msg="StartContainer for \"b9d30d8fac6c23fbc5f098c380f7ff9dc4564425363af6ddc78a533ba5517c80\"" Sep 4 17:21:18.388817 systemd[1]: Started cri-containerd-b9d30d8fac6c23fbc5f098c380f7ff9dc4564425363af6ddc78a533ba5517c80.scope - libcontainer container b9d30d8fac6c23fbc5f098c380f7ff9dc4564425363af6ddc78a533ba5517c80. 
Sep 4 17:21:18.430399 containerd[1446]: time="2024-09-04T17:21:18.430355406Z" level=info msg="StartContainer for \"b9d30d8fac6c23fbc5f098c380f7ff9dc4564425363af6ddc78a533ba5517c80\" returns successfully" Sep 4 17:21:18.888144 kubelet[2544]: E0904 17:21:18.888072 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:18.952911 kubelet[2544]: E0904 17:21:18.952875 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:18.962323 kubelet[2544]: I0904 17:21:18.961919 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-86576c5bd8-5bd5j" podStartSLOduration=1.93191458 podStartE2EDuration="4.961881785s" podCreationTimestamp="2024-09-04 17:21:14 +0000 UTC" firstStartedPulling="2024-09-04 17:21:15.284691214 +0000 UTC m=+22.503804798" lastFinishedPulling="2024-09-04 17:21:18.314658419 +0000 UTC m=+25.533772003" observedRunningTime="2024-09-04 17:21:18.961749776 +0000 UTC m=+26.180863360" watchObservedRunningTime="2024-09-04 17:21:18.961881785 +0000 UTC m=+26.180995329" Sep 4 17:21:19.041286 kubelet[2544]: E0904 17:21:19.041255 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:19.041782 kubelet[2544]: W0904 17:21:19.041438 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:19.041782 kubelet[2544]: E0904 17:21:19.041468 2544 plugins.go:730] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:19.041981 kubelet[2544]: E0904 17:21:19.041969 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:19.042095 kubelet[2544]: W0904 17:21:19.042031 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:19.042095 kubelet[2544]: E0904 17:21:19.042052 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:21:19.043111 kubelet[2544]: E0904 17:21:19.042402 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:19.043111 kubelet[2544]: W0904 17:21:19.042415 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:19.043111 kubelet[2544]: E0904 17:21:19.042428 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... identical three-message FlexVolume sequences (driver-call.go:262, driver-call.go:149, plugins.go:730) for the missing nodeagent~uds driver repeated through Sep 4 17:21:19.050; repeats elided ...] Sep 4 17:21:19.050772 kubelet[2544]: E0904 17:21:19.050756 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:21:19.050772 kubelet[2544]: W0904 17:21:19.050770 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:21:19.050829 kubelet[2544]: E0904 17:21:19.050796 2544 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:21:19.694360 containerd[1446]: time="2024-09-04T17:21:19.694309171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:19.694983 containerd[1446]: time="2024-09-04T17:21:19.694762002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:21:19.697080 containerd[1446]: time="2024-09-04T17:21:19.697046635Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:19.699596 containerd[1446]: time="2024-09-04T17:21:19.699157937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:19.700077 containerd[1446]: time="2024-09-04T17:21:19.700021235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.38513168s" Sep 4 17:21:19.700077 containerd[1446]: time="2024-09-04T17:21:19.700067638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:21:19.702809 containerd[1446]: time="2024-09-04T17:21:19.702754339Z" level=info msg="CreateContainer within sandbox \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:21:19.736931 containerd[1446]: time="2024-09-04T17:21:19.736865550Z" level=info msg="CreateContainer within sandbox \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66\"" Sep 4 17:21:19.737774 containerd[1446]: time="2024-09-04T17:21:19.737731208Z" level=info msg="StartContainer for \"c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66\"" Sep 4 17:21:19.763791 systemd[1]: Started cri-containerd-c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66.scope - libcontainer container c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66. Sep 4 17:21:19.802756 containerd[1446]: time="2024-09-04T17:21:19.802706974Z" level=info msg="StartContainer for \"c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66\" returns successfully" Sep 4 17:21:19.829940 systemd[1]: cri-containerd-c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66.scope: Deactivated successfully. Sep 4 17:21:19.854576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66-rootfs.mount: Deactivated successfully. 
Sep 4 17:21:19.869618 containerd[1446]: time="2024-09-04T17:21:19.867181105Z" level=info msg="shim disconnected" id=c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66 namespace=k8s.io Sep 4 17:21:19.869618 containerd[1446]: time="2024-09-04T17:21:19.869613989Z" level=warning msg="cleaning up after shim disconnected" id=c60ec54c95a944db0ee9984bcafc93bf5c71f3a86c493a2a37db79c029bfed66 namespace=k8s.io Sep 4 17:21:19.869618 containerd[1446]: time="2024-09-04T17:21:19.869627749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:21:19.955712 kubelet[2544]: I0904 17:21:19.955568 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:21:19.956073 kubelet[2544]: E0904 17:21:19.956051 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:19.956206 kubelet[2544]: E0904 17:21:19.956172 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:19.958324 containerd[1446]: time="2024-09-04T17:21:19.958279985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:21:20.888167 kubelet[2544]: E0904 17:21:20.888075 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:22.875997 kubelet[2544]: I0904 17:21:22.875795 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:21:22.884666 kubelet[2544]: E0904 17:21:22.884622 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:22.888353 kubelet[2544]: E0904 17:21:22.888315 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:22.960814 kubelet[2544]: E0904 17:21:22.960775 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:24.127444 containerd[1446]: time="2024-09-04T17:21:24.127330907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:24.129379 containerd[1446]: time="2024-09-04T17:21:24.129331581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:21:24.130301 containerd[1446]: time="2024-09-04T17:21:24.130269314Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:24.132484 containerd[1446]: time="2024-09-04T17:21:24.132428516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:24.133433 containerd[1446]: time="2024-09-04T17:21:24.133394411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.175064542s" Sep 4 17:21:24.133433 containerd[1446]: time="2024-09-04T17:21:24.133432453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:21:24.138470 containerd[1446]: time="2024-09-04T17:21:24.138415255Z" level=info msg="CreateContainer within sandbox \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:21:24.153013 containerd[1446]: time="2024-09-04T17:21:24.152949999Z" level=info msg="CreateContainer within sandbox \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff\"" Sep 4 17:21:24.155326 containerd[1446]: time="2024-09-04T17:21:24.153775446Z" level=info msg="StartContainer for \"d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff\"" Sep 4 17:21:24.194807 systemd[1]: Started cri-containerd-d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff.scope - libcontainer container d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff. 
Sep 4 17:21:24.242600 containerd[1446]: time="2024-09-04T17:21:24.242508074Z" level=info msg="StartContainer for \"d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff\" returns successfully" Sep 4 17:21:24.710064 containerd[1446]: time="2024-09-04T17:21:24.710003483Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:21:24.713247 systemd[1]: cri-containerd-d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff.scope: Deactivated successfully. Sep 4 17:21:24.724089 kubelet[2544]: I0904 17:21:24.724046 2544 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:21:24.744865 containerd[1446]: time="2024-09-04T17:21:24.744794575Z" level=info msg="shim disconnected" id=d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff namespace=k8s.io Sep 4 17:21:24.744865 containerd[1446]: time="2024-09-04T17:21:24.744850178Z" level=warning msg="cleaning up after shim disconnected" id=d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff namespace=k8s.io Sep 4 17:21:24.744865 containerd[1446]: time="2024-09-04T17:21:24.744860898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:21:24.750015 kubelet[2544]: I0904 17:21:24.749961 2544 topology_manager.go:215] "Topology Admit Handler" podUID="83aa095b-3e52-41ca-9aa6-57186d153ed4" podNamespace="kube-system" podName="coredns-76f75df574-mhvbp" Sep 4 17:21:24.751969 kubelet[2544]: I0904 17:21:24.751361 2544 topology_manager.go:215] "Topology Admit Handler" podUID="ee12f804-e0c2-496a-8f48-b9a1d21198a6" podNamespace="calico-system" podName="calico-kube-controllers-5f9bdfc4c-7tvk2" Sep 4 17:21:24.753207 kubelet[2544]: I0904 17:21:24.752301 2544 topology_manager.go:215] "Topology Admit Handler" 
podUID="b6d78a67-e0e4-42df-a851-94f64f8dabc6" podNamespace="kube-system" podName="coredns-76f75df574-xq9t4" Sep 4 17:21:24.764186 systemd[1]: Created slice kubepods-burstable-pod83aa095b_3e52_41ca_9aa6_57186d153ed4.slice - libcontainer container kubepods-burstable-pod83aa095b_3e52_41ca_9aa6_57186d153ed4.slice. Sep 4 17:21:24.772181 systemd[1]: Created slice kubepods-besteffort-podee12f804_e0c2_496a_8f48_b9a1d21198a6.slice - libcontainer container kubepods-besteffort-podee12f804_e0c2_496a_8f48_b9a1d21198a6.slice. Sep 4 17:21:24.776692 systemd[1]: Created slice kubepods-burstable-podb6d78a67_e0e4_42df_a851_94f64f8dabc6.slice - libcontainer container kubepods-burstable-podb6d78a67_e0e4_42df_a851_94f64f8dabc6.slice. Sep 4 17:21:24.889620 kubelet[2544]: I0904 17:21:24.889553 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd242\" (UniqueName: \"kubernetes.io/projected/83aa095b-3e52-41ca-9aa6-57186d153ed4-kube-api-access-pd242\") pod \"coredns-76f75df574-mhvbp\" (UID: \"83aa095b-3e52-41ca-9aa6-57186d153ed4\") " pod="kube-system/coredns-76f75df574-mhvbp" Sep 4 17:21:24.889765 kubelet[2544]: I0904 17:21:24.889649 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee12f804-e0c2-496a-8f48-b9a1d21198a6-tigera-ca-bundle\") pod \"calico-kube-controllers-5f9bdfc4c-7tvk2\" (UID: \"ee12f804-e0c2-496a-8f48-b9a1d21198a6\") " pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" Sep 4 17:21:24.889765 kubelet[2544]: I0904 17:21:24.889722 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83aa095b-3e52-41ca-9aa6-57186d153ed4-config-volume\") pod \"coredns-76f75df574-mhvbp\" (UID: \"83aa095b-3e52-41ca-9aa6-57186d153ed4\") " pod="kube-system/coredns-76f75df574-mhvbp" Sep 4 17:21:24.889765 
kubelet[2544]: I0904 17:21:24.889750 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmmn\" (UniqueName: \"kubernetes.io/projected/ee12f804-e0c2-496a-8f48-b9a1d21198a6-kube-api-access-dfmmn\") pod \"calico-kube-controllers-5f9bdfc4c-7tvk2\" (UID: \"ee12f804-e0c2-496a-8f48-b9a1d21198a6\") " pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" Sep 4 17:21:24.889848 kubelet[2544]: I0904 17:21:24.889779 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6d78a67-e0e4-42df-a851-94f64f8dabc6-config-volume\") pod \"coredns-76f75df574-xq9t4\" (UID: \"b6d78a67-e0e4-42df-a851-94f64f8dabc6\") " pod="kube-system/coredns-76f75df574-xq9t4" Sep 4 17:21:24.889873 kubelet[2544]: I0904 17:21:24.889850 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dvz\" (UniqueName: \"kubernetes.io/projected/b6d78a67-e0e4-42df-a851-94f64f8dabc6-kube-api-access-r4dvz\") pod \"coredns-76f75df574-xq9t4\" (UID: \"b6d78a67-e0e4-42df-a851-94f64f8dabc6\") " pod="kube-system/coredns-76f75df574-xq9t4" Sep 4 17:21:24.893469 systemd[1]: Created slice kubepods-besteffort-pod0101168c_9721_4950_958b_1ab1d8e66f6e.slice - libcontainer container kubepods-besteffort-pod0101168c_9721_4950_958b_1ab1d8e66f6e.slice. 
Sep 4 17:21:24.896154 containerd[1446]: time="2024-09-04T17:21:24.896104348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vf6pl,Uid:0101168c-9721-4950-958b-1ab1d8e66f6e,Namespace:calico-system,Attempt:0,}" Sep 4 17:21:24.973619 kubelet[2544]: E0904 17:21:24.973295 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:24.980191 containerd[1446]: time="2024-09-04T17:21:24.979952700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:21:25.070750 kubelet[2544]: E0904 17:21:25.070713 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:25.071400 containerd[1446]: time="2024-09-04T17:21:25.071362877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mhvbp,Uid:83aa095b-3e52-41ca-9aa6-57186d153ed4,Namespace:kube-system,Attempt:0,}" Sep 4 17:21:25.076316 containerd[1446]: time="2024-09-04T17:21:25.076273147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9bdfc4c-7tvk2,Uid:ee12f804-e0c2-496a-8f48-b9a1d21198a6,Namespace:calico-system,Attempt:0,}" Sep 4 17:21:25.079924 kubelet[2544]: E0904 17:21:25.079614 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:25.080683 containerd[1446]: time="2024-09-04T17:21:25.080302648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xq9t4,Uid:b6d78a67-e0e4-42df-a851-94f64f8dabc6,Namespace:kube-system,Attempt:0,}" Sep 4 17:21:25.092784 containerd[1446]: time="2024-09-04T17:21:25.092735131Z" level=error msg="Failed to destroy network for sandbox 
\"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.093104 containerd[1446]: time="2024-09-04T17:21:25.093074710Z" level=error msg="encountered an error cleaning up failed sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.093147 containerd[1446]: time="2024-09-04T17:21:25.093127753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vf6pl,Uid:0101168c-9721-4950-958b-1ab1d8e66f6e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.100850 kubelet[2544]: E0904 17:21:25.100637 2544 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.100850 kubelet[2544]: E0904 17:21:25.100738 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:25.100850 kubelet[2544]: E0904 17:21:25.100759 2544 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vf6pl" Sep 4 17:21:25.101099 kubelet[2544]: E0904 17:21:25.100823 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vf6pl_calico-system(0101168c-9721-4950-958b-1ab1d8e66f6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vf6pl_calico-system(0101168c-9721-4950-958b-1ab1d8e66f6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:25.155576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d884885ae065da951f2d31b0f1acfd162bcf90e8a2e7569e3f8d17ab7d4db2ff-rootfs.mount: Deactivated successfully. 
Sep 4 17:21:25.158564 containerd[1446]: time="2024-09-04T17:21:25.158419060Z" level=error msg="Failed to destroy network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.160222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862-shm.mount: Deactivated successfully. Sep 4 17:21:25.164500 kubelet[2544]: E0904 17:21:25.161478 2544 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.164500 kubelet[2544]: E0904 17:21:25.161533 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mhvbp" Sep 4 17:21:25.164500 kubelet[2544]: E0904 17:21:25.161555 2544 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-mhvbp" Sep 4 17:21:25.164726 containerd[1446]: time="2024-09-04T17:21:25.161181211Z" level=error msg="encountered an error cleaning up failed sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.164726 containerd[1446]: time="2024-09-04T17:21:25.161250095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mhvbp,Uid:83aa095b-3e52-41ca-9aa6-57186d153ed4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.164726 containerd[1446]: time="2024-09-04T17:21:25.163258725Z" level=error msg="Failed to destroy network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.164726 containerd[1446]: time="2024-09-04T17:21:25.164248300Z" level=error msg="encountered an error cleaning up failed sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.164726 containerd[1446]: time="2024-09-04T17:21:25.164308303Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-xq9t4,Uid:b6d78a67-e0e4-42df-a851-94f64f8dabc6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.164851 kubelet[2544]: E0904 17:21:25.161628 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mhvbp_kube-system(83aa095b-3e52-41ca-9aa6-57186d153ed4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mhvbp_kube-system(83aa095b-3e52-41ca-9aa6-57186d153ed4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mhvbp" podUID="83aa095b-3e52-41ca-9aa6-57186d153ed4" Sep 4 17:21:25.165261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd-shm.mount: Deactivated successfully. 
Sep 4 17:21:25.166152 kubelet[2544]: E0904 17:21:25.166116 2544 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.166251 kubelet[2544]: E0904 17:21:25.166197 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xq9t4" Sep 4 17:21:25.166251 kubelet[2544]: E0904 17:21:25.166217 2544 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xq9t4" Sep 4 17:21:25.166305 kubelet[2544]: E0904 17:21:25.166271 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xq9t4_kube-system(b6d78a67-e0e4-42df-a851-94f64f8dabc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xq9t4_kube-system(b6d78a67-e0e4-42df-a851-94f64f8dabc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xq9t4" podUID="b6d78a67-e0e4-42df-a851-94f64f8dabc6" Sep 4 17:21:25.176956 containerd[1446]: time="2024-09-04T17:21:25.176892234Z" level=error msg="Failed to destroy network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.178620 containerd[1446]: time="2024-09-04T17:21:25.177227053Z" level=error msg="encountered an error cleaning up failed sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.178620 containerd[1446]: time="2024-09-04T17:21:25.177278136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9bdfc4c-7tvk2,Uid:ee12f804-e0c2-496a-8f48-b9a1d21198a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.178763 kubelet[2544]: E0904 17:21:25.177758 2544 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:25.178763 kubelet[2544]: E0904 17:21:25.177807 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" Sep 4 17:21:25.178763 kubelet[2544]: E0904 17:21:25.177828 2544 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" Sep 4 17:21:25.178672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b-shm.mount: Deactivated successfully. 
Sep 4 17:21:25.178912 kubelet[2544]: E0904 17:21:25.177879 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f9bdfc4c-7tvk2_calico-system(ee12f804-e0c2-496a-8f48-b9a1d21198a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f9bdfc4c-7tvk2_calico-system(ee12f804-e0c2-496a-8f48-b9a1d21198a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" podUID="ee12f804-e0c2-496a-8f48-b9a1d21198a6" Sep 4 17:21:25.980817 containerd[1446]: time="2024-09-04T17:21:25.980775518Z" level=info msg="StopPodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\"" Sep 4 17:21:25.981378 kubelet[2544]: I0904 17:21:25.979896 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:25.981695 containerd[1446]: time="2024-09-04T17:21:25.981120856Z" level=info msg="Ensure that sandbox 73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b in task-service has been cleanup successfully" Sep 4 17:21:25.986469 kubelet[2544]: I0904 17:21:25.986427 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:25.987569 containerd[1446]: time="2024-09-04T17:21:25.987530729Z" level=info msg="StopPodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\"" Sep 4 17:21:25.987778 containerd[1446]: time="2024-09-04T17:21:25.987714659Z" level=info msg="Ensure that sandbox 
37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793 in task-service has been cleanup successfully" Sep 4 17:21:25.989234 kubelet[2544]: I0904 17:21:25.988846 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:25.989516 containerd[1446]: time="2024-09-04T17:21:25.989347508Z" level=info msg="StopPodSandbox for \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\"" Sep 4 17:21:25.990036 containerd[1446]: time="2024-09-04T17:21:25.990006625Z" level=info msg="Ensure that sandbox 1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd in task-service has been cleanup successfully" Sep 4 17:21:25.992921 kubelet[2544]: I0904 17:21:25.992883 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:25.994059 containerd[1446]: time="2024-09-04T17:21:25.994012765Z" level=info msg="StopPodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\"" Sep 4 17:21:25.994224 containerd[1446]: time="2024-09-04T17:21:25.994199775Z" level=info msg="Ensure that sandbox f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862 in task-service has been cleanup successfully" Sep 4 17:21:26.023822 containerd[1446]: time="2024-09-04T17:21:26.023773442Z" level=error msg="StopPodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" failed" error="failed to destroy network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:26.028775 kubelet[2544]: E0904 17:21:26.028735 2544 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:26.029153 kubelet[2544]: E0904 17:21:26.029045 2544 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b"} Sep 4 17:21:26.029153 kubelet[2544]: E0904 17:21:26.029097 2544 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee12f804-e0c2-496a-8f48-b9a1d21198a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:21:26.029153 kubelet[2544]: E0904 17:21:26.029129 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee12f804-e0c2-496a-8f48-b9a1d21198a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" podUID="ee12f804-e0c2-496a-8f48-b9a1d21198a6" Sep 4 17:21:26.045396 containerd[1446]: time="2024-09-04T17:21:26.045335352Z" level=error msg="StopPodSandbox for 
\"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" failed" error="failed to destroy network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:26.045839 kubelet[2544]: E0904 17:21:26.045603 2544 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:26.045839 kubelet[2544]: E0904 17:21:26.045651 2544 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd"} Sep 4 17:21:26.045839 kubelet[2544]: E0904 17:21:26.045687 2544 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6d78a67-e0e4-42df-a851-94f64f8dabc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:21:26.045839 kubelet[2544]: E0904 17:21:26.045716 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6d78a67-e0e4-42df-a851-94f64f8dabc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xq9t4" podUID="b6d78a67-e0e4-42df-a851-94f64f8dabc6" Sep 4 17:21:26.056850 containerd[1446]: time="2024-09-04T17:21:26.056456145Z" level=error msg="StopPodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" failed" error="failed to destroy network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:26.056993 kubelet[2544]: E0904 17:21:26.056871 2544 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:26.056993 kubelet[2544]: E0904 17:21:26.056915 2544 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862"} Sep 4 17:21:26.056993 kubelet[2544]: E0904 17:21:26.056949 2544 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83aa095b-3e52-41ca-9aa6-57186d153ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:21:26.056993 kubelet[2544]: E0904 17:21:26.056990 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83aa095b-3e52-41ca-9aa6-57186d153ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mhvbp" podUID="83aa095b-3e52-41ca-9aa6-57186d153ed4" Sep 4 17:21:26.060257 containerd[1446]: time="2024-09-04T17:21:26.060210705Z" level=error msg="StopPodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" failed" error="failed to destroy network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:21:26.060837 kubelet[2544]: E0904 17:21:26.060443 2544 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:26.060837 kubelet[2544]: E0904 17:21:26.060487 2544 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793"} Sep 4 17:21:26.060837 kubelet[2544]: E0904 17:21:26.060526 2544 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0101168c-9721-4950-958b-1ab1d8e66f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:21:26.060837 kubelet[2544]: E0904 17:21:26.060557 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0101168c-9721-4950-958b-1ab1d8e66f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vf6pl" podUID="0101168c-9721-4950-958b-1ab1d8e66f6e" Sep 4 17:21:28.309934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267156878.mount: Deactivated successfully. 
Sep 4 17:21:28.585361 containerd[1446]: time="2024-09-04T17:21:28.585230781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:28.586754 containerd[1446]: time="2024-09-04T17:21:28.586367238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:21:28.589724 containerd[1446]: time="2024-09-04T17:21:28.587739227Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:28.592081 containerd[1446]: time="2024-09-04T17:21:28.592024443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.61199434s" Sep 4 17:21:28.592608 containerd[1446]: time="2024-09-04T17:21:28.592557910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:21:28.604640 containerd[1446]: time="2024-09-04T17:21:28.604572995Z" level=info msg="CreateContainer within sandbox \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:21:28.605485 containerd[1446]: time="2024-09-04T17:21:28.605438239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:28.628926 containerd[1446]: time="2024-09-04T17:21:28.628873540Z" level=info msg="CreateContainer 
within sandbox \"c59607d1823a60cd23f356aac3438b96a2ef583bff810b67eb95ca5fbff258da\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a65fc2febf5594f307aa26741b36cce4a5457d8933baa6fa21b5c93bdb920b9\"" Sep 4 17:21:28.629554 containerd[1446]: time="2024-09-04T17:21:28.629529973Z" level=info msg="StartContainer for \"7a65fc2febf5594f307aa26741b36cce4a5457d8933baa6fa21b5c93bdb920b9\"" Sep 4 17:21:28.693832 systemd[1]: Started cri-containerd-7a65fc2febf5594f307aa26741b36cce4a5457d8933baa6fa21b5c93bdb920b9.scope - libcontainer container 7a65fc2febf5594f307aa26741b36cce4a5457d8933baa6fa21b5c93bdb920b9. Sep 4 17:21:28.786643 containerd[1446]: time="2024-09-04T17:21:28.786568764Z" level=info msg="StartContainer for \"7a65fc2febf5594f307aa26741b36cce4a5457d8933baa6fa21b5c93bdb920b9\" returns successfully" Sep 4 17:21:28.953799 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:21:28.953959 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 17:21:29.003703 kubelet[2544]: E0904 17:21:29.003668 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:29.019734 kubelet[2544]: I0904 17:21:29.019652 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-f6vv6" podStartSLOduration=1.7112517980000002 podStartE2EDuration="15.019609919s" podCreationTimestamp="2024-09-04 17:21:14 +0000 UTC" firstStartedPulling="2024-09-04 17:21:15.284693734 +0000 UTC m=+22.503807318" lastFinishedPulling="2024-09-04 17:21:28.593051895 +0000 UTC m=+35.812165439" observedRunningTime="2024-09-04 17:21:29.018031682 +0000 UTC m=+36.237145266" watchObservedRunningTime="2024-09-04 17:21:29.019609919 +0000 UTC m=+36.238723503" Sep 4 17:21:29.983628 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:39004.service - OpenSSH per-connection server daemon (10.0.0.1:39004). Sep 4 17:21:30.009768 kubelet[2544]: E0904 17:21:30.009556 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:30.031545 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 39004 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:30.033107 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:30.039923 systemd-logind[1424]: New session 8 of user core. Sep 4 17:21:30.041441 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:21:30.190614 sshd[3623]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:30.194965 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:21:30.195438 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:39004.service: Deactivated successfully. 
Sep 4 17:21:30.197343 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:21:30.199757 systemd-logind[1424]: Removed session 8. Sep 4 17:21:30.445620 kernel: bpftool[3786]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:21:30.619106 systemd-networkd[1380]: vxlan.calico: Link UP Sep 4 17:21:30.619117 systemd-networkd[1380]: vxlan.calico: Gained carrier Sep 4 17:21:31.014746 kubelet[2544]: E0904 17:21:31.014045 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:32.119706 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Sep 4 17:21:35.203136 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:53596.service - OpenSSH per-connection server daemon (10.0.0.1:53596). Sep 4 17:21:35.246343 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 53596 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:35.247879 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:35.251993 systemd-logind[1424]: New session 9 of user core. Sep 4 17:21:35.261967 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:21:35.426459 sshd[3883]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:35.429922 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:53596.service: Deactivated successfully. Sep 4 17:21:35.431758 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:21:35.432423 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:21:35.435130 systemd-logind[1424]: Removed session 9. 
Sep 4 17:21:36.889960 containerd[1446]: time="2024-09-04T17:21:36.889904865Z" level=info msg="StopPodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\"" Sep 4 17:21:36.894045 containerd[1446]: time="2024-09-04T17:21:36.889904905Z" level=info msg="StopPodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\"" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.037 [INFO][3937] k8s.go 608: Cleaning up netns ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.038 [INFO][3937] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" iface="eth0" netns="/var/run/netns/cni-4d5f33bf-f7e1-a464-e4f1-839af54bc2f6" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.039 [INFO][3937] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" iface="eth0" netns="/var/run/netns/cni-4d5f33bf-f7e1-a464-e4f1-839af54bc2f6" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.039 [INFO][3937] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" iface="eth0" netns="/var/run/netns/cni-4d5f33bf-f7e1-a464-e4f1-839af54bc2f6" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.039 [INFO][3937] k8s.go 615: Releasing IP address(es) ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.039 [INFO][3937] utils.go 188: Calico CNI releasing IP address ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.151 [INFO][3953] ipam_plugin.go 417: Releasing address using handleID ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.151 [INFO][3953] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.151 [INFO][3953] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.164 [WARNING][3953] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.164 [INFO][3953] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.165 [INFO][3953] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:37.170452 containerd[1446]: 2024-09-04 17:21:37.169 [INFO][3937] k8s.go 621: Teardown processing complete. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:37.171013 containerd[1446]: time="2024-09-04T17:21:37.170685141Z" level=info msg="TearDown network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" successfully" Sep 4 17:21:37.171013 containerd[1446]: time="2024-09-04T17:21:37.170716142Z" level=info msg="StopPodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" returns successfully" Sep 4 17:21:37.172614 kubelet[2544]: E0904 17:21:37.171099 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:37.176375 containerd[1446]: time="2024-09-04T17:21:37.176318812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mhvbp,Uid:83aa095b-3e52-41ca-9aa6-57186d153ed4,Namespace:kube-system,Attempt:1,}" Sep 4 17:21:37.179049 systemd[1]: run-netns-cni\x2d4d5f33bf\x2df7e1\x2da464\x2de4f1\x2d839af54bc2f6.mount: Deactivated successfully. 
Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.031 [INFO][3938] k8s.go 608: Cleaning up netns ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.031 [INFO][3938] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" iface="eth0" netns="/var/run/netns/cni-3448e325-8518-6f72-2448-9ed4d897c122" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.033 [INFO][3938] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" iface="eth0" netns="/var/run/netns/cni-3448e325-8518-6f72-2448-9ed4d897c122" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.035 [INFO][3938] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" iface="eth0" netns="/var/run/netns/cni-3448e325-8518-6f72-2448-9ed4d897c122" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.035 [INFO][3938] k8s.go 615: Releasing IP address(es) ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.035 [INFO][3938] utils.go 188: Calico CNI releasing IP address ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.151 [INFO][3952] ipam_plugin.go 417: Releasing address using handleID ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.151 [INFO][3952] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.166 [INFO][3952] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.178 [WARNING][3952] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.179 [INFO][3952] ipam_plugin.go 445: Releasing address using workloadID ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.182 [INFO][3952] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:37.186671 containerd[1446]: 2024-09-04 17:21:37.185 [INFO][3938] k8s.go 621: Teardown processing complete. 
ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:37.187149 containerd[1446]: time="2024-09-04T17:21:37.186762720Z" level=info msg="TearDown network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" successfully" Sep 4 17:21:37.187149 containerd[1446]: time="2024-09-04T17:21:37.186788761Z" level=info msg="StopPodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" returns successfully" Sep 4 17:21:37.188456 containerd[1446]: time="2024-09-04T17:21:37.188298183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9bdfc4c-7tvk2,Uid:ee12f804-e0c2-496a-8f48-b9a1d21198a6,Namespace:calico-system,Attempt:1,}" Sep 4 17:21:37.188562 systemd[1]: run-netns-cni\x2d3448e325\x2d8518\x2d6f72\x2d2448\x2d9ed4d897c122.mount: Deactivated successfully. Sep 4 17:21:37.321172 systemd-networkd[1380]: cali4941ee9eeb4: Link UP Sep 4 17:21:37.321507 systemd-networkd[1380]: cali4941ee9eeb4: Gained carrier Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.242 [INFO][3979] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--mhvbp-eth0 coredns-76f75df574- kube-system 83aa095b-3e52-41ca-9aa6-57186d153ed4 824 0 2024-09-04 17:21:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-mhvbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4941ee9eeb4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.243 [INFO][3979] k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.275 [INFO][3998] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" HandleID="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.287 [INFO][3998] ipam_plugin.go 270: Auto assigning IP ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" HandleID="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058fc20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-mhvbp", "timestamp":"2024-09-04 17:21:37.275305748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.288 [INFO][3998] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.288 [INFO][3998] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.288 [INFO][3998] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.290 [INFO][3998] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.296 [INFO][3998] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.301 [INFO][3998] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.303 [INFO][3998] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.305 [INFO][3998] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.305 [INFO][3998] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.307 [INFO][3998] ipam.go 1685: Creating new handle: k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9 Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.311 [INFO][3998] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.316 [INFO][3998] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" host="localhost" Sep 4 
17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.316 [INFO][3998] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" host="localhost" Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.316 [INFO][3998] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:37.342114 containerd[1446]: 2024-09-04 17:21:37.316 [INFO][3998] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" HandleID="k8s-pod-network.7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.343119 containerd[1446]: 2024-09-04 17:21:37.318 [INFO][3979] k8s.go 386: Populated endpoint ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mhvbp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"83aa095b-3e52-41ca-9aa6-57186d153ed4", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-mhvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4941ee9eeb4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:37.343119 containerd[1446]: 2024-09-04 17:21:37.318 [INFO][3979] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.343119 containerd[1446]: 2024-09-04 17:21:37.318 [INFO][3979] dataplane_linux.go 68: Setting the host side veth name to cali4941ee9eeb4 ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.343119 containerd[1446]: 2024-09-04 17:21:37.322 [INFO][3979] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.343119 containerd[1446]: 2024-09-04 17:21:37.322 [INFO][3979] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mhvbp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"83aa095b-3e52-41ca-9aa6-57186d153ed4", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9", Pod:"coredns-76f75df574-mhvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4941ee9eeb4", MAC:"46:5d:e4:b2:00:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:37.343119 containerd[1446]: 2024-09-04 17:21:37.335 [INFO][3979] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9" Namespace="kube-system" Pod="coredns-76f75df574-mhvbp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:37.366947 systemd-networkd[1380]: calic85135ce791: Link UP Sep 4 17:21:37.367087 systemd-networkd[1380]: calic85135ce791: Gained carrier Sep 4 17:21:37.370254 containerd[1446]: time="2024-09-04T17:21:37.369662415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:37.370254 containerd[1446]: time="2024-09-04T17:21:37.369730538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:37.370254 containerd[1446]: time="2024-09-04T17:21:37.369747258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:37.370254 containerd[1446]: time="2024-09-04T17:21:37.369844542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.239 [INFO][3968] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0 calico-kube-controllers-5f9bdfc4c- calico-system ee12f804-e0c2-496a-8f48-b9a1d21198a6 823 0 2024-09-04 17:21:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f9bdfc4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f9bdfc4c-7tvk2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic85135ce791 [] []}} ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.240 [INFO][3968] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.275 [INFO][3997] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" HandleID="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.287 [INFO][3997] ipam_plugin.go 270: Auto assigning IP 
ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" HandleID="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027c380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f9bdfc4c-7tvk2", "timestamp":"2024-09-04 17:21:37.275219305 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.288 [INFO][3997] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.316 [INFO][3997] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.316 [INFO][3997] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.320 [INFO][3997] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.328 [INFO][3997] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.334 [INFO][3997] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.336 [INFO][3997] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.343 [INFO][3997] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 
17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.343 [INFO][3997] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.351 [INFO][3997] ipam.go 1685: Creating new handle: k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580 Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.355 [INFO][3997] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.360 [INFO][3997] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.360 [INFO][3997] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" host="localhost" Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.360 [INFO][3997] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:21:37.385408 containerd[1446]: 2024-09-04 17:21:37.360 [INFO][3997] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" HandleID="k8s-pod-network.6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.386066 containerd[1446]: 2024-09-04 17:21:37.364 [INFO][3968] k8s.go 386: Populated endpoint ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0", GenerateName:"calico-kube-controllers-5f9bdfc4c-", Namespace:"calico-system", SelfLink:"", UID:"ee12f804-e0c2-496a-8f48-b9a1d21198a6", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9bdfc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f9bdfc4c-7tvk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic85135ce791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:37.386066 containerd[1446]: 2024-09-04 17:21:37.364 [INFO][3968] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.386066 containerd[1446]: 2024-09-04 17:21:37.365 [INFO][3968] dataplane_linux.go 68: Setting the host side veth name to calic85135ce791 ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.386066 containerd[1446]: 2024-09-04 17:21:37.367 [INFO][3968] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.386066 containerd[1446]: 2024-09-04 17:21:37.368 [INFO][3968] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0", GenerateName:"calico-kube-controllers-5f9bdfc4c-", 
Namespace:"calico-system", SelfLink:"", UID:"ee12f804-e0c2-496a-8f48-b9a1d21198a6", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9bdfc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580", Pod:"calico-kube-controllers-5f9bdfc4c-7tvk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic85135ce791", MAC:"9a:fa:a8:1f:94:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:37.386066 containerd[1446]: 2024-09-04 17:21:37.383 [INFO][3968] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580" Namespace="calico-system" Pod="calico-kube-controllers-5f9bdfc4c-7tvk2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:37.396808 systemd[1]: Started cri-containerd-7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9.scope - libcontainer container 7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9. 
Sep 4 17:21:37.412667 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:21:37.414207 containerd[1446]: time="2024-09-04T17:21:37.414069955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:37.414207 containerd[1446]: time="2024-09-04T17:21:37.414138838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:37.414207 containerd[1446]: time="2024-09-04T17:21:37.414155038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:37.415187 containerd[1446]: time="2024-09-04T17:21:37.415026834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:37.432952 containerd[1446]: time="2024-09-04T17:21:37.432564713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mhvbp,Uid:83aa095b-3e52-41ca-9aa6-57186d153ed4,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9\"" Sep 4 17:21:37.433934 kubelet[2544]: E0904 17:21:37.433702 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:37.436890 systemd[1]: Started cri-containerd-6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580.scope - libcontainer container 6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580. 
Sep 4 17:21:37.442809 containerd[1446]: time="2024-09-04T17:21:37.442759930Z" level=info msg="CreateContainer within sandbox \"7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:21:37.453193 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:21:37.469640 containerd[1446]: time="2024-09-04T17:21:37.469574349Z" level=info msg="CreateContainer within sandbox \"7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f683d9edf38012c6605572c3fd7eb4c49c109316e5cb18ff300f8d172a95f90f\"" Sep 4 17:21:37.470637 containerd[1446]: time="2024-09-04T17:21:37.470085050Z" level=info msg="StartContainer for \"f683d9edf38012c6605572c3fd7eb4c49c109316e5cb18ff300f8d172a95f90f\"" Sep 4 17:21:37.476886 containerd[1446]: time="2024-09-04T17:21:37.476815966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9bdfc4c-7tvk2,Uid:ee12f804-e0c2-496a-8f48-b9a1d21198a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580\"" Sep 4 17:21:37.479009 containerd[1446]: time="2024-09-04T17:21:37.478973655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:21:37.501798 systemd[1]: Started cri-containerd-f683d9edf38012c6605572c3fd7eb4c49c109316e5cb18ff300f8d172a95f90f.scope - libcontainer container f683d9edf38012c6605572c3fd7eb4c49c109316e5cb18ff300f8d172a95f90f. 
Sep 4 17:21:37.527958 containerd[1446]: time="2024-09-04T17:21:37.527827417Z" level=info msg="StartContainer for \"f683d9edf38012c6605572c3fd7eb4c49c109316e5cb18ff300f8d172a95f90f\" returns successfully" Sep 4 17:21:38.031614 kubelet[2544]: E0904 17:21:38.029285 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:38.037820 kubelet[2544]: I0904 17:21:38.037756 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mhvbp" podStartSLOduration=30.037716765 podStartE2EDuration="30.037716765s" podCreationTimestamp="2024-09-04 17:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:21:38.037257826 +0000 UTC m=+45.256371410" watchObservedRunningTime="2024-09-04 17:21:38.037716765 +0000 UTC m=+45.256830349" Sep 4 17:21:38.708164 containerd[1446]: time="2024-09-04T17:21:38.708109337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:38.721287 containerd[1446]: time="2024-09-04T17:21:38.721154582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:21:38.732927 containerd[1446]: time="2024-09-04T17:21:38.732831012Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:38.741033 containerd[1446]: time="2024-09-04T17:21:38.740982460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 
17:21:38.741534 containerd[1446]: time="2024-09-04T17:21:38.741480120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.262462264s" Sep 4 17:21:38.741534 containerd[1446]: time="2024-09-04T17:21:38.741525322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:21:38.748522 containerd[1446]: time="2024-09-04T17:21:38.748475402Z" level=info msg="CreateContainer within sandbox \"6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:21:38.766207 containerd[1446]: time="2024-09-04T17:21:38.765979226Z" level=info msg="CreateContainer within sandbox \"6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1b10d60a28ed39da35a4c9b4ebc7ec0fc5493048cd0a04f38711f78d7ed1b4ce\"" Sep 4 17:21:38.767756 containerd[1446]: time="2024-09-04T17:21:38.766490246Z" level=info msg="StartContainer for \"1b10d60a28ed39da35a4c9b4ebc7ec0fc5493048cd0a04f38711f78d7ed1b4ce\"" Sep 4 17:21:38.804770 systemd[1]: Started cri-containerd-1b10d60a28ed39da35a4c9b4ebc7ec0fc5493048cd0a04f38711f78d7ed1b4ce.scope - libcontainer container 1b10d60a28ed39da35a4c9b4ebc7ec0fc5493048cd0a04f38711f78d7ed1b4ce. 
Sep 4 17:21:38.836346 containerd[1446]: time="2024-09-04T17:21:38.836306095Z" level=info msg="StartContainer for \"1b10d60a28ed39da35a4c9b4ebc7ec0fc5493048cd0a04f38711f78d7ed1b4ce\" returns successfully" Sep 4 17:21:38.889176 containerd[1446]: time="2024-09-04T17:21:38.888863370Z" level=info msg="StopPodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\"" Sep 4 17:21:38.890201 containerd[1446]: time="2024-09-04T17:21:38.890116860Z" level=info msg="StopPodSandbox for \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\"" Sep 4 17:21:38.903907 systemd-networkd[1380]: calic85135ce791: Gained IPv6LL Sep 4 17:21:38.968953 systemd-networkd[1380]: cali4941ee9eeb4: Gained IPv6LL Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:38.955 [INFO][4237] k8s.go 608: Cleaning up netns ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:38.956 [INFO][4237] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" iface="eth0" netns="/var/run/netns/cni-ce74f1e1-08b0-9ad2-42aa-c9c6f8295518" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:38.956 [INFO][4237] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" iface="eth0" netns="/var/run/netns/cni-ce74f1e1-08b0-9ad2-42aa-c9c6f8295518" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:38.956 [INFO][4237] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" iface="eth0" netns="/var/run/netns/cni-ce74f1e1-08b0-9ad2-42aa-c9c6f8295518" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:38.956 [INFO][4237] k8s.go 615: Releasing IP address(es) ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:38.956 [INFO][4237] utils.go 188: Calico CNI releasing IP address ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.009 [INFO][4258] ipam_plugin.go 417: Releasing address using handleID ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.009 [INFO][4258] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.009 [INFO][4258] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.022 [WARNING][4258] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.022 [INFO][4258] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.024 [INFO][4258] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:39.029756 containerd[1446]: 2024-09-04 17:21:39.027 [INFO][4237] k8s.go 621: Teardown processing complete. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:39.032382 containerd[1446]: time="2024-09-04T17:21:39.032280839Z" level=info msg="TearDown network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" successfully" Sep 4 17:21:39.032382 containerd[1446]: time="2024-09-04T17:21:39.032309120Z" level=info msg="StopPodSandbox for \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" returns successfully" Sep 4 17:21:39.032680 kubelet[2544]: E0904 17:21:39.032619 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:39.033374 containerd[1446]: time="2024-09-04T17:21:39.033336881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xq9t4,Uid:b6d78a67-e0e4-42df-a851-94f64f8dabc6,Namespace:kube-system,Attempt:1,}" Sep 4 17:21:39.039401 kubelet[2544]: E0904 17:21:39.039361 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:38.952 [INFO][4248] k8s.go 608: Cleaning up netns ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:38.953 [INFO][4248] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" iface="eth0" netns="/var/run/netns/cni-51558421-6038-6125-bf49-7bc861883dd9" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:38.953 [INFO][4248] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" iface="eth0" netns="/var/run/netns/cni-51558421-6038-6125-bf49-7bc861883dd9" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:38.955 [INFO][4248] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" iface="eth0" netns="/var/run/netns/cni-51558421-6038-6125-bf49-7bc861883dd9" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:38.955 [INFO][4248] k8s.go 615: Releasing IP address(es) ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:38.955 [INFO][4248] utils.go 188: Calico CNI releasing IP address ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.020 [INFO][4259] ipam_plugin.go 417: Releasing address using handleID ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.020 [INFO][4259] ipam_plugin.go 358: About to acquire host-wide IPAM 
lock. Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.024 [INFO][4259] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.041 [WARNING][4259] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.041 [INFO][4259] ipam_plugin.go 445: Releasing address using workloadID ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.045 [INFO][4259] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:39.050602 containerd[1446]: 2024-09-04 17:21:39.047 [INFO][4248] k8s.go 621: Teardown processing complete. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:39.051493 containerd[1446]: time="2024-09-04T17:21:39.050767010Z" level=info msg="TearDown network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" successfully" Sep 4 17:21:39.051493 containerd[1446]: time="2024-09-04T17:21:39.050798571Z" level=info msg="StopPodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" returns successfully" Sep 4 17:21:39.051829 containerd[1446]: time="2024-09-04T17:21:39.051751489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vf6pl,Uid:0101168c-9721-4950-958b-1ab1d8e66f6e,Namespace:calico-system,Attempt:1,}" Sep 4 17:21:39.179172 systemd[1]: run-netns-cni\x2dce74f1e1\x2d08b0\x2d9ad2\x2d42aa\x2dc9c6f8295518.mount: Deactivated successfully. 
Sep 4 17:21:39.179265 systemd[1]: run-netns-cni\x2d51558421\x2d6038\x2d6125\x2dbf49\x2d7bc861883dd9.mount: Deactivated successfully. Sep 4 17:21:39.222956 systemd-networkd[1380]: cali4d1e8179b85: Link UP Sep 4 17:21:39.225908 systemd-networkd[1380]: cali4d1e8179b85: Gained carrier Sep 4 17:21:39.237935 kubelet[2544]: I0904 17:21:39.237895 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f9bdfc4c-7tvk2" podStartSLOduration=23.974258056 podStartE2EDuration="25.237649678s" podCreationTimestamp="2024-09-04 17:21:14 +0000 UTC" firstStartedPulling="2024-09-04 17:21:37.478407471 +0000 UTC m=+44.697521055" lastFinishedPulling="2024-09-04 17:21:38.741799093 +0000 UTC m=+45.960912677" observedRunningTime="2024-09-04 17:21:39.049854134 +0000 UTC m=+46.268967718" watchObservedRunningTime="2024-09-04 17:21:39.237649678 +0000 UTC m=+46.456763302" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.127 [INFO][4276] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--xq9t4-eth0 coredns-76f75df574- kube-system b6d78a67-e0e4-42df-a851-94f64f8dabc6 862 0 2024-09-04 17:21:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-xq9t4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d1e8179b85 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.128 [INFO][4276] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" 
Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.164 [INFO][4305] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" HandleID="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.187 [INFO][4305] ipam_plugin.go 270: Auto assigning IP ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" HandleID="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000301db0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-xq9t4", "timestamp":"2024-09-04 17:21:39.164041408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.187 [INFO][4305] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.187 [INFO][4305] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.187 [INFO][4305] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.189 [INFO][4305] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.193 [INFO][4305] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.198 [INFO][4305] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.201 [INFO][4305] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.206 [INFO][4305] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.206 [INFO][4305] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.207 [INFO][4305] ipam.go 1685: Creating new handle: k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917 Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.211 [INFO][4305] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.216 [INFO][4305] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" host="localhost" Sep 4 
17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.216 [INFO][4305] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" host="localhost" Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.216 [INFO][4305] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:39.240899 containerd[1446]: 2024-09-04 17:21:39.216 [INFO][4305] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" HandleID="k8s-pod-network.a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.242086 containerd[1446]: 2024-09-04 17:21:39.219 [INFO][4276] k8s.go 386: Populated endpoint ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xq9t4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6d78a67-e0e4-42df-a851-94f64f8dabc6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-xq9t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d1e8179b85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:39.242086 containerd[1446]: 2024-09-04 17:21:39.219 [INFO][4276] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.242086 containerd[1446]: 2024-09-04 17:21:39.219 [INFO][4276] dataplane_linux.go 68: Setting the host side veth name to cali4d1e8179b85 ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.242086 containerd[1446]: 2024-09-04 17:21:39.223 [INFO][4276] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.242086 containerd[1446]: 2024-09-04 17:21:39.223 [INFO][4276] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xq9t4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6d78a67-e0e4-42df-a851-94f64f8dabc6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917", Pod:"coredns-76f75df574-xq9t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d1e8179b85", MAC:"b6:7f:9a:2a:f1:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:39.242086 containerd[1446]: 2024-09-04 17:21:39.237 [INFO][4276] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917" Namespace="kube-system" Pod="coredns-76f75df574-xq9t4" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:39.266352 systemd-networkd[1380]: caliefec90acc31: Link UP Sep 4 17:21:39.267251 systemd-networkd[1380]: caliefec90acc31: Gained carrier Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.132 [INFO][4288] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vf6pl-eth0 csi-node-driver- calico-system 0101168c-9721-4950-958b-1ab1d8e66f6e 861 0 2024-09-04 17:21:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-vf6pl eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliefec90acc31 [] []}} ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.132 [INFO][4288] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.185 [INFO][4310] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" 
HandleID="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.202 [INFO][4310] ipam_plugin.go 270: Auto assigning IP ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" HandleID="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000369940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vf6pl", "timestamp":"2024-09-04 17:21:39.185675023 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.203 [INFO][4310] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.217 [INFO][4310] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.217 [INFO][4310] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.220 [INFO][4310] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.228 [INFO][4310] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.237 [INFO][4310] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.242 [INFO][4310] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.245 [INFO][4310] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.245 [INFO][4310] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.249 [INFO][4310] ipam.go 1685: Creating new handle: k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.253 [INFO][4310] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.261 [INFO][4310] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" host="localhost" Sep 4 
17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.261 [INFO][4310] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" host="localhost" Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.261 [INFO][4310] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:39.283336 containerd[1446]: 2024-09-04 17:21:39.262 [INFO][4310] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" HandleID="k8s-pod-network.088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.284879 containerd[1446]: 2024-09-04 17:21:39.264 [INFO][4288] k8s.go 386: Populated endpoint ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vf6pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0101168c-9721-4950-958b-1ab1d8e66f6e", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vf6pl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliefec90acc31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:39.284879 containerd[1446]: 2024-09-04 17:21:39.264 [INFO][4288] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.284879 containerd[1446]: 2024-09-04 17:21:39.264 [INFO][4288] dataplane_linux.go 68: Setting the host side veth name to caliefec90acc31 ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.284879 containerd[1446]: 2024-09-04 17:21:39.266 [INFO][4288] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.284879 containerd[1446]: 2024-09-04 17:21:39.267 [INFO][4288] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vf6pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0101168c-9721-4950-958b-1ab1d8e66f6e", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f", Pod:"csi-node-driver-vf6pl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliefec90acc31", MAC:"f2:63:4d:fa:4b:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:39.284879 containerd[1446]: 2024-09-04 17:21:39.277 [INFO][4288] k8s.go 500: Wrote updated endpoint to datastore ContainerID="088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f" Namespace="calico-system" Pod="csi-node-driver-vf6pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:39.298232 containerd[1446]: time="2024-09-04T17:21:39.269586741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:39.298232 containerd[1446]: time="2024-09-04T17:21:39.298209392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:39.298232 containerd[1446]: time="2024-09-04T17:21:39.298227393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:39.298533 containerd[1446]: time="2024-09-04T17:21:39.298329837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:39.316809 containerd[1446]: time="2024-09-04T17:21:39.316049018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:21:39.316809 containerd[1446]: time="2024-09-04T17:21:39.316116700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:21:39.316809 containerd[1446]: time="2024-09-04T17:21:39.316132781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:39.316809 containerd[1446]: time="2024-09-04T17:21:39.316238025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:21:39.355774 systemd[1]: Started cri-containerd-088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f.scope - libcontainer container 088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f. Sep 4 17:21:39.356965 systemd[1]: Started cri-containerd-a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917.scope - libcontainer container a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917. 
Sep 4 17:21:39.400380 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:21:39.418823 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:21:39.432127 containerd[1446]: time="2024-09-04T17:21:39.432076805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xq9t4,Uid:b6d78a67-e0e4-42df-a851-94f64f8dabc6,Namespace:kube-system,Attempt:1,} returns sandbox id \"a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917\"" Sep 4 17:21:39.433709 kubelet[2544]: E0904 17:21:39.433686 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:39.437120 containerd[1446]: time="2024-09-04T17:21:39.437074122Z" level=info msg="CreateContainer within sandbox \"a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:21:39.440786 containerd[1446]: time="2024-09-04T17:21:39.440749388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vf6pl,Uid:0101168c-9721-4950-958b-1ab1d8e66f6e,Namespace:calico-system,Attempt:1,} returns sandbox id \"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f\"" Sep 4 17:21:39.443714 containerd[1446]: time="2024-09-04T17:21:39.443665823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:21:39.460374 containerd[1446]: time="2024-09-04T17:21:39.460329522Z" level=info msg="CreateContainer within sandbox \"a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfdfcfc3a35bb1046fe2956b6cf880ce00d081627c8fedbbbfc77991907cbaa5\"" Sep 4 17:21:39.462766 containerd[1446]: time="2024-09-04T17:21:39.462659214Z" level=info 
msg="StartContainer for \"dfdfcfc3a35bb1046fe2956b6cf880ce00d081627c8fedbbbfc77991907cbaa5\"" Sep 4 17:21:39.490790 systemd[1]: Started cri-containerd-dfdfcfc3a35bb1046fe2956b6cf880ce00d081627c8fedbbbfc77991907cbaa5.scope - libcontainer container dfdfcfc3a35bb1046fe2956b6cf880ce00d081627c8fedbbbfc77991907cbaa5. Sep 4 17:21:39.515766 containerd[1446]: time="2024-09-04T17:21:39.515717272Z" level=info msg="StartContainer for \"dfdfcfc3a35bb1046fe2956b6cf880ce00d081627c8fedbbbfc77991907cbaa5\" returns successfully" Sep 4 17:21:40.052252 kubelet[2544]: E0904 17:21:40.051825 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:40.054484 kubelet[2544]: E0904 17:21:40.052810 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:40.077311 kubelet[2544]: I0904 17:21:40.077256 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xq9t4" podStartSLOduration=32.076818005 podStartE2EDuration="32.076818005s" podCreationTimestamp="2024-09-04 17:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:21:40.063510648 +0000 UTC m=+47.282624232" watchObservedRunningTime="2024-09-04 17:21:40.076818005 +0000 UTC m=+47.295931669" Sep 4 17:21:40.311737 systemd-networkd[1380]: cali4d1e8179b85: Gained IPv6LL Sep 4 17:21:40.448527 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:53612.service - OpenSSH per-connection server daemon (10.0.0.1:53612). 
Sep 4 17:21:40.456969 containerd[1446]: time="2024-09-04T17:21:40.456207556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:40.456969 containerd[1446]: time="2024-09-04T17:21:40.456933304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:21:40.457474 containerd[1446]: time="2024-09-04T17:21:40.457437884Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:40.459739 containerd[1446]: time="2024-09-04T17:21:40.459700292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:40.460381 containerd[1446]: time="2024-09-04T17:21:40.460350037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.016648933s" Sep 4 17:21:40.460459 containerd[1446]: time="2024-09-04T17:21:40.460384238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:21:40.479559 containerd[1446]: time="2024-09-04T17:21:40.479505542Z" level=info msg="CreateContainer within sandbox \"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:21:40.500248 containerd[1446]: time="2024-09-04T17:21:40.500073421Z" level=info msg="CreateContainer within sandbox 
\"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4686a78c857232c6e8970aa8cd95434de46c82db987dcdc07deaeccb5a95ea20\"" Sep 4 17:21:40.500786 containerd[1446]: time="2024-09-04T17:21:40.500731367Z" level=info msg="StartContainer for \"4686a78c857232c6e8970aa8cd95434de46c82db987dcdc07deaeccb5a95ea20\"" Sep 4 17:21:40.503371 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 53612 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:40.506291 systemd-networkd[1380]: caliefec90acc31: Gained IPv6LL Sep 4 17:21:40.506295 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:40.510429 systemd-logind[1424]: New session 10 of user core. Sep 4 17:21:40.519207 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:21:40.545813 systemd[1]: Started cri-containerd-4686a78c857232c6e8970aa8cd95434de46c82db987dcdc07deaeccb5a95ea20.scope - libcontainer container 4686a78c857232c6e8970aa8cd95434de46c82db987dcdc07deaeccb5a95ea20. Sep 4 17:21:40.576085 containerd[1446]: time="2024-09-04T17:21:40.575848327Z" level=info msg="StartContainer for \"4686a78c857232c6e8970aa8cd95434de46c82db987dcdc07deaeccb5a95ea20\" returns successfully" Sep 4 17:21:40.578859 containerd[1446]: time="2024-09-04T17:21:40.578831963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:21:40.780860 sshd[4501]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:40.789878 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:53612.service: Deactivated successfully. Sep 4 17:21:40.792761 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:21:40.794416 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:21:40.805986 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:53614.service - OpenSSH per-connection server daemon (10.0.0.1:53614). 
Sep 4 17:21:40.807070 systemd-logind[1424]: Removed session 10. Sep 4 17:21:40.840687 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 53614 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:40.841511 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:40.847229 systemd-logind[1424]: New session 11 of user core. Sep 4 17:21:40.853781 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:21:41.062074 kubelet[2544]: E0904 17:21:41.062041 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:41.079813 sshd[4550]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:41.090142 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:53614.service: Deactivated successfully. Sep 4 17:21:41.093622 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:21:41.095935 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:21:41.111945 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:53622.service - OpenSSH per-connection server daemon (10.0.0.1:53622). Sep 4 17:21:41.112663 systemd-logind[1424]: Removed session 11. Sep 4 17:21:41.146640 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 53622 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:41.147231 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:41.150991 systemd-logind[1424]: New session 12 of user core. Sep 4 17:21:41.161748 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:21:41.319721 sshd[4572]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:41.323905 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:53622.service: Deactivated successfully. Sep 4 17:21:41.325804 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 4 17:21:41.328326 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:21:41.329220 systemd-logind[1424]: Removed session 12. Sep 4 17:21:41.538664 containerd[1446]: time="2024-09-04T17:21:41.538598269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:41.539539 containerd[1446]: time="2024-09-04T17:21:41.539101288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:21:41.539830 containerd[1446]: time="2024-09-04T17:21:41.539812275Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:41.542460 containerd[1446]: time="2024-09-04T17:21:41.542039960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:21:41.543351 containerd[1446]: time="2024-09-04T17:21:41.543216045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 964.230516ms" Sep 4 17:21:41.543351 containerd[1446]: time="2024-09-04T17:21:41.543253047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:21:41.545826 containerd[1446]: time="2024-09-04T17:21:41.545788824Z" level=info 
msg="CreateContainer within sandbox \"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:21:41.560673 containerd[1446]: time="2024-09-04T17:21:41.560564989Z" level=info msg="CreateContainer within sandbox \"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"051685a40bdabd9c261213ee660cba5e9e4014230c78734a2bb598276cc4cc8e\"" Sep 4 17:21:41.561315 containerd[1446]: time="2024-09-04T17:21:41.561212174Z" level=info msg="StartContainer for \"051685a40bdabd9c261213ee660cba5e9e4014230c78734a2bb598276cc4cc8e\"" Sep 4 17:21:41.595793 systemd[1]: Started cri-containerd-051685a40bdabd9c261213ee660cba5e9e4014230c78734a2bb598276cc4cc8e.scope - libcontainer container 051685a40bdabd9c261213ee660cba5e9e4014230c78734a2bb598276cc4cc8e. Sep 4 17:21:41.625199 containerd[1446]: time="2024-09-04T17:21:41.625133940Z" level=info msg="StartContainer for \"051685a40bdabd9c261213ee660cba5e9e4014230c78734a2bb598276cc4cc8e\" returns successfully" Sep 4 17:21:41.970755 kubelet[2544]: I0904 17:21:41.970718 2544 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:21:41.973616 kubelet[2544]: I0904 17:21:41.973587 2544 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:21:46.331194 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:43462.service - OpenSSH per-connection server daemon (10.0.0.1:43462). 
Sep 4 17:21:46.375763 sshd[4641]: Accepted publickey for core from 10.0.0.1 port 43462 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:46.377305 sshd[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:46.381661 systemd-logind[1424]: New session 13 of user core. Sep 4 17:21:46.391744 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:21:46.596713 sshd[4641]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:46.604083 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:43462.service: Deactivated successfully. Sep 4 17:21:46.605672 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:21:46.607285 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:21:46.608611 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:43466.service - OpenSSH per-connection server daemon (10.0.0.1:43466). Sep 4 17:21:46.609340 systemd-logind[1424]: Removed session 13. Sep 4 17:21:46.646062 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 43466 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:46.647726 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:46.655816 systemd-logind[1424]: New session 14 of user core. Sep 4 17:21:46.661802 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:21:46.908008 sshd[4655]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:46.919216 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:43466.service: Deactivated successfully. Sep 4 17:21:46.921758 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:21:46.924168 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:21:46.930970 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:43480.service - OpenSSH per-connection server daemon (10.0.0.1:43480). Sep 4 17:21:46.932287 systemd-logind[1424]: Removed session 14. 
Sep 4 17:21:46.978391 sshd[4667]: Accepted publickey for core from 10.0.0.1 port 43480 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:46.980097 sshd[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:46.984266 systemd-logind[1424]: New session 15 of user core. Sep 4 17:21:46.994801 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:21:48.449565 sshd[4667]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:48.455621 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:43480.service: Deactivated successfully. Sep 4 17:21:48.459762 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:21:48.462564 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:21:48.470435 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:43496.service - OpenSSH per-connection server daemon (10.0.0.1:43496). Sep 4 17:21:48.474642 systemd-logind[1424]: Removed session 15. Sep 4 17:21:48.508816 sshd[4689]: Accepted publickey for core from 10.0.0.1 port 43496 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:48.510360 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:48.514553 systemd-logind[1424]: New session 16 of user core. Sep 4 17:21:48.518790 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:21:48.832490 sshd[4689]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:48.839313 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:43496.service: Deactivated successfully. Sep 4 17:21:48.841403 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:21:48.842181 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:21:48.855567 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:43500.service - OpenSSH per-connection server daemon (10.0.0.1:43500). Sep 4 17:21:48.856730 systemd-logind[1424]: Removed session 16. 
Sep 4 17:21:48.892034 sshd[4702]: Accepted publickey for core from 10.0.0.1 port 43500 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:48.895448 sshd[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:48.900569 systemd-logind[1424]: New session 17 of user core. Sep 4 17:21:48.907738 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:21:49.056461 sshd[4702]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:49.060971 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:43500.service: Deactivated successfully. Sep 4 17:21:49.062946 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:21:49.064711 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:21:49.065665 systemd-logind[1424]: Removed session 17. Sep 4 17:21:52.770017 kubelet[2544]: E0904 17:21:52.769915 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:21:52.788539 kubelet[2544]: I0904 17:21:52.788493 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-vf6pl" podStartSLOduration=36.682524336 podStartE2EDuration="38.783573897s" podCreationTimestamp="2024-09-04 17:21:14 +0000 UTC" firstStartedPulling="2024-09-04 17:21:39.442692384 +0000 UTC m=+46.661805968" lastFinishedPulling="2024-09-04 17:21:41.543741945 +0000 UTC m=+48.762855529" observedRunningTime="2024-09-04 17:21:42.075369806 +0000 UTC m=+49.294483390" watchObservedRunningTime="2024-09-04 17:21:52.783573897 +0000 UTC m=+60.002687481" Sep 4 17:21:52.875220 containerd[1446]: time="2024-09-04T17:21:52.875168492Z" level=info msg="StopPodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\"" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.929 [WARNING][4769] k8s.go 572: CNI_CONTAINERID does not 
match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0", GenerateName:"calico-kube-controllers-5f9bdfc4c-", Namespace:"calico-system", SelfLink:"", UID:"ee12f804-e0c2-496a-8f48-b9a1d21198a6", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9bdfc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580", Pod:"calico-kube-controllers-5f9bdfc4c-7tvk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic85135ce791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.929 [INFO][4769] k8s.go 608: Cleaning up netns ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.929 [INFO][4769] dataplane_linux.go 526: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" iface="eth0" netns="" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.930 [INFO][4769] k8s.go 615: Releasing IP address(es) ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.930 [INFO][4769] utils.go 188: Calico CNI releasing IP address ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.958 [INFO][4779] ipam_plugin.go 417: Releasing address using handleID ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.958 [INFO][4779] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.959 [INFO][4779] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.971 [WARNING][4779] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.971 [INFO][4779] ipam_plugin.go 445: Releasing address using workloadID ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.973 [INFO][4779] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:52.976949 containerd[1446]: 2024-09-04 17:21:52.975 [INFO][4769] k8s.go 621: Teardown processing complete. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:52.976949 containerd[1446]: time="2024-09-04T17:21:52.976937950Z" level=info msg="TearDown network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" successfully" Sep 4 17:21:52.976949 containerd[1446]: time="2024-09-04T17:21:52.976961110Z" level=info msg="StopPodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" returns successfully" Sep 4 17:21:52.977481 containerd[1446]: time="2024-09-04T17:21:52.977416086Z" level=info msg="RemovePodSandbox for \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\"" Sep 4 17:21:52.977481 containerd[1446]: time="2024-09-04T17:21:52.977444647Z" level=info msg="Forcibly stopping sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\"" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.018 [WARNING][4802] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0", GenerateName:"calico-kube-controllers-5f9bdfc4c-", Namespace:"calico-system", SelfLink:"", UID:"ee12f804-e0c2-496a-8f48-b9a1d21198a6", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9bdfc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bed030c2d63d7e6b286c447ea5dc14b9736a0c4a14af48f63cd52a46b362580", Pod:"calico-kube-controllers-5f9bdfc4c-7tvk2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic85135ce791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.019 [INFO][4802] k8s.go 608: Cleaning up netns ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.019 [INFO][4802] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" iface="eth0" netns="" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.019 [INFO][4802] k8s.go 615: Releasing IP address(es) ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.019 [INFO][4802] utils.go 188: Calico CNI releasing IP address ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.041 [INFO][4810] ipam_plugin.go 417: Releasing address using handleID ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.041 [INFO][4810] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.041 [INFO][4810] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.050 [WARNING][4810] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.050 [INFO][4810] ipam_plugin.go 445: Releasing address using workloadID ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" HandleID="k8s-pod-network.73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Workload="localhost-k8s-calico--kube--controllers--5f9bdfc4c--7tvk2-eth0" Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.052 [INFO][4810] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:53.057091 containerd[1446]: 2024-09-04 17:21:53.054 [INFO][4802] k8s.go 621: Teardown processing complete. ContainerID="73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b" Sep 4 17:21:53.057091 containerd[1446]: time="2024-09-04T17:21:53.055922506Z" level=info msg="TearDown network for sandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" successfully" Sep 4 17:21:53.065516 containerd[1446]: time="2024-09-04T17:21:53.065475704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:21:53.065709 containerd[1446]: time="2024-09-04T17:21:53.065688711Z" level=info msg="RemovePodSandbox \"73b895013b90a54d3be04c0797e07638ddfb52669c37ab1418d18e8cf8ca5f0b\" returns successfully" Sep 4 17:21:53.066258 containerd[1446]: time="2024-09-04T17:21:53.066228209Z" level=info msg="StopPodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\"" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.111 [WARNING][4832] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vf6pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0101168c-9721-4950-958b-1ab1d8e66f6e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f", Pod:"csi-node-driver-vf6pl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"caliefec90acc31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.111 [INFO][4832] k8s.go 608: Cleaning up netns ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.111 [INFO][4832] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" iface="eth0" netns="" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.111 [INFO][4832] k8s.go 615: Releasing IP address(es) ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.111 [INFO][4832] utils.go 188: Calico CNI releasing IP address ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.137 [INFO][4840] ipam_plugin.go 417: Releasing address using handleID ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.137 [INFO][4840] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.137 [INFO][4840] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.146 [WARNING][4840] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.146 [INFO][4840] ipam_plugin.go 445: Releasing address using workloadID ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.149 [INFO][4840] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:53.153320 containerd[1446]: 2024-09-04 17:21:53.150 [INFO][4832] k8s.go 621: Teardown processing complete. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.155531 containerd[1446]: time="2024-09-04T17:21:53.153374871Z" level=info msg="TearDown network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" successfully" Sep 4 17:21:53.155531 containerd[1446]: time="2024-09-04T17:21:53.153399512Z" level=info msg="StopPodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" returns successfully" Sep 4 17:21:53.155531 containerd[1446]: time="2024-09-04T17:21:53.154669874Z" level=info msg="RemovePodSandbox for \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\"" Sep 4 17:21:53.155531 containerd[1446]: time="2024-09-04T17:21:53.155016765Z" level=info msg="Forcibly stopping sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\"" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.194 [WARNING][4863] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vf6pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0101168c-9721-4950-958b-1ab1d8e66f6e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"088454ff4203bdd92e763f8558d85ef51fde14b93c5e17f1da2e21c8b4d1c62f", Pod:"csi-node-driver-vf6pl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliefec90acc31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.195 [INFO][4863] k8s.go 608: Cleaning up netns ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.195 [INFO][4863] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" iface="eth0" netns="" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.195 [INFO][4863] k8s.go 615: Releasing IP address(es) ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.195 [INFO][4863] utils.go 188: Calico CNI releasing IP address ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.214 [INFO][4872] ipam_plugin.go 417: Releasing address using handleID ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.214 [INFO][4872] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.214 [INFO][4872] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.223 [WARNING][4872] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.223 [INFO][4872] ipam_plugin.go 445: Releasing address using workloadID ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" HandleID="k8s-pod-network.37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Workload="localhost-k8s-csi--node--driver--vf6pl-eth0" Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.225 [INFO][4872] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:21:53.230001 containerd[1446]: 2024-09-04 17:21:53.227 [INFO][4863] k8s.go 621: Teardown processing complete. ContainerID="37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793" Sep 4 17:21:53.230001 containerd[1446]: time="2024-09-04T17:21:53.229871658Z" level=info msg="TearDown network for sandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" successfully" Sep 4 17:21:53.232550 containerd[1446]: time="2024-09-04T17:21:53.232504625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:21:53.232642 containerd[1446]: time="2024-09-04T17:21:53.232570748Z" level=info msg="RemovePodSandbox \"37e9b575c775ae9c06074be09da8e51114f7dd8624c7b9cd9633335a06981793\" returns successfully" Sep 4 17:21:53.233112 containerd[1446]: time="2024-09-04T17:21:53.233082685Z" level=info msg="StopPodSandbox for \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\"" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.269 [WARNING][4895] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xq9t4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6d78a67-e0e4-42df-a851-94f64f8dabc6", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917", Pod:"coredns-76f75df574-xq9t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d1e8179b85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.269 [INFO][4895] k8s.go 608: Cleaning up netns 
ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.269 [INFO][4895] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" iface="eth0" netns="" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.270 [INFO][4895] k8s.go 615: Releasing IP address(es) ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.270 [INFO][4895] utils.go 188: Calico CNI releasing IP address ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.288 [INFO][4902] ipam_plugin.go 417: Releasing address using handleID ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.289 [INFO][4902] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.289 [INFO][4902] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.299 [WARNING][4902] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.299 [INFO][4902] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.300 [INFO][4902] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:53.305045 containerd[1446]: 2024-09-04 17:21:53.302 [INFO][4895] k8s.go 621: Teardown processing complete. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.305817 containerd[1446]: time="2024-09-04T17:21:53.305527097Z" level=info msg="TearDown network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" successfully" Sep 4 17:21:53.305817 containerd[1446]: time="2024-09-04T17:21:53.305558978Z" level=info msg="StopPodSandbox for \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" returns successfully" Sep 4 17:21:53.306519 containerd[1446]: time="2024-09-04T17:21:53.306171078Z" level=info msg="RemovePodSandbox for \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\"" Sep 4 17:21:53.306519 containerd[1446]: time="2024-09-04T17:21:53.306202079Z" level=info msg="Forcibly stopping sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\"" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.342 [WARNING][4924] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xq9t4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6d78a67-e0e4-42df-a851-94f64f8dabc6", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a35d290910946341b225c4b0bace5fc257f78da858792fb482f13bd3876bf917", Pod:"coredns-76f75df574-xq9t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d1e8179b85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.342 [INFO][4924] k8s.go 608: Cleaning up netns 
ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.342 [INFO][4924] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" iface="eth0" netns="" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.342 [INFO][4924] k8s.go 615: Releasing IP address(es) ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.342 [INFO][4924] utils.go 188: Calico CNI releasing IP address ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.363 [INFO][4932] ipam_plugin.go 417: Releasing address using handleID ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.363 [INFO][4932] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.363 [INFO][4932] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.372 [WARNING][4932] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.372 [INFO][4932] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" HandleID="k8s-pod-network.1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Workload="localhost-k8s-coredns--76f75df574--xq9t4-eth0" Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.373 [INFO][4932] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:53.376794 containerd[1446]: 2024-09-04 17:21:53.375 [INFO][4924] k8s.go 621: Teardown processing complete. ContainerID="1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd" Sep 4 17:21:53.376794 containerd[1446]: time="2024-09-04T17:21:53.376745948Z" level=info msg="TearDown network for sandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" successfully" Sep 4 17:21:53.382138 containerd[1446]: time="2024-09-04T17:21:53.382096806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:21:53.382316 containerd[1446]: time="2024-09-04T17:21:53.382165488Z" level=info msg="RemovePodSandbox \"1daaeefdbabfa7d7b55349f90cb67c1beceebf514cae48934250ab3b45bae8cd\" returns successfully" Sep 4 17:21:53.383595 containerd[1446]: time="2024-09-04T17:21:53.383284726Z" level=info msg="StopPodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\"" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.419 [WARNING][4954] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mhvbp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"83aa095b-3e52-41ca-9aa6-57186d153ed4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9", Pod:"coredns-76f75df574-mhvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4941ee9eeb4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.419 [INFO][4954] k8s.go 608: Cleaning up netns ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.419 [INFO][4954] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" iface="eth0" netns="" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.419 [INFO][4954] k8s.go 615: Releasing IP address(es) ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.419 [INFO][4954] utils.go 188: Calico CNI releasing IP address ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.439 [INFO][4962] ipam_plugin.go 417: Releasing address using handleID ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.439 [INFO][4962] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.440 [INFO][4962] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.451 [WARNING][4962] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.452 [INFO][4962] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.453 [INFO][4962] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:53.458233 containerd[1446]: 2024-09-04 17:21:53.455 [INFO][4954] k8s.go 621: Teardown processing complete. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.458983 containerd[1446]: time="2024-09-04T17:21:53.458383546Z" level=info msg="TearDown network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" successfully" Sep 4 17:21:53.458983 containerd[1446]: time="2024-09-04T17:21:53.458416787Z" level=info msg="StopPodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" returns successfully" Sep 4 17:21:53.458983 containerd[1446]: time="2024-09-04T17:21:53.458877562Z" level=info msg="RemovePodSandbox for \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\"" Sep 4 17:21:53.458983 containerd[1446]: time="2024-09-04T17:21:53.458905643Z" level=info msg="Forcibly stopping sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\"" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.499 [WARNING][4985] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mhvbp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"83aa095b-3e52-41ca-9aa6-57186d153ed4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ee155579c2df6a56ea637923f7b0b258cddb2199a6befb1001d5b234a4e19a9", Pod:"coredns-76f75df574-mhvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4941ee9eeb4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.500 [INFO][4985] k8s.go 608: 
Cleaning up netns ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.500 [INFO][4985] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" iface="eth0" netns="" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.500 [INFO][4985] k8s.go 615: Releasing IP address(es) ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.500 [INFO][4985] utils.go 188: Calico CNI releasing IP address ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.518 [INFO][4993] ipam_plugin.go 417: Releasing address using handleID ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.518 [INFO][4993] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.518 [INFO][4993] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.526 [WARNING][4993] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.526 [INFO][4993] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" HandleID="k8s-pod-network.f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Workload="localhost-k8s-coredns--76f75df574--mhvbp-eth0" Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.528 [INFO][4993] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:21:53.531855 containerd[1446]: 2024-09-04 17:21:53.530 [INFO][4985] k8s.go 621: Teardown processing complete. ContainerID="f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862" Sep 4 17:21:53.531855 containerd[1446]: time="2024-09-04T17:21:53.531889953Z" level=info msg="TearDown network for sandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" successfully" Sep 4 17:21:53.535452 containerd[1446]: time="2024-09-04T17:21:53.535418471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:21:53.535522 containerd[1446]: time="2024-09-04T17:21:53.535477073Z" level=info msg="RemovePodSandbox \"f492927734a232672282105fe8747e905bc10a82b636a3e4afb9114e3f89d862\" returns successfully" Sep 4 17:21:54.070218 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784). 
Sep 4 17:21:54.119789 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:54.121942 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:54.126864 systemd-logind[1424]: New session 18 of user core. Sep 4 17:21:54.134832 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:21:54.301006 sshd[5001]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:54.305488 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:34784.service: Deactivated successfully. Sep 4 17:21:54.308168 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:21:54.308760 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:21:54.309937 systemd-logind[1424]: Removed session 18. Sep 4 17:21:59.310539 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794). Sep 4 17:21:59.350803 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:21:59.352301 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:21:59.357649 systemd-logind[1424]: New session 19 of user core. Sep 4 17:21:59.367893 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:21:59.530913 sshd[5036]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:59.534648 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:34794.service: Deactivated successfully. Sep 4 17:21:59.536753 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:21:59.537402 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:21:59.538566 systemd-logind[1424]: Removed session 19. Sep 4 17:22:04.544822 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). 
Sep 4 17:22:04.587777 sshd[5056]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:04.589250 sshd[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:04.593689 systemd-logind[1424]: New session 20 of user core. Sep 4 17:22:04.603758 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:22:04.729389 sshd[5056]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:04.732800 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:38470.service: Deactivated successfully. Sep 4 17:22:04.734426 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:22:04.736163 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:22:04.737205 systemd-logind[1424]: Removed session 20.