Aug 5 22:21:11.903988 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 22:21:11.904009 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:37:57 -00 2024
Aug 5 22:21:11.904019 kernel: KASLR enabled
Aug 5 22:21:11.904025 kernel: efi: EFI v2.7 by EDK II
Aug 5 22:21:11.904030 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 22:21:11.904036 kernel: random: crng init done
Aug 5 22:21:11.904043 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:21:11.904049 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 22:21:11.904055 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 22:21:11.904062 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904068 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904074 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904080 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904086 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904093 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904101 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904107 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904114 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:21:11.904120 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 22:21:11.904126 kernel: NUMA: Failed to initialise from firmware
Aug 5 22:21:11.904133 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:21:11.904139 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 5 22:21:11.904145 kernel: Zone ranges:
Aug 5 22:21:11.904151 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:21:11.904158 kernel: DMA32 empty
Aug 5 22:21:11.904165 kernel: Normal empty
Aug 5 22:21:11.904171 kernel: Movable zone start for each node
Aug 5 22:21:11.904178 kernel: Early memory node ranges
Aug 5 22:21:11.904184 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 22:21:11.904190 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 22:21:11.904197 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 22:21:11.904203 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 22:21:11.904209 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 22:21:11.904215 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 22:21:11.904222 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 22:21:11.904228 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:21:11.904234 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 22:21:11.904242 kernel: psci: probing for conduit method from ACPI.
Aug 5 22:21:11.904248 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 22:21:11.904255 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 22:21:11.904264 kernel: psci: Trusted OS migration not required
Aug 5 22:21:11.904270 kernel: psci: SMC Calling Convention v1.1
Aug 5 22:21:11.904277 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 22:21:11.904286 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 22:21:11.904292 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 22:21:11.904299 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 22:21:11.904306 kernel: Detected PIPT I-cache on CPU0
Aug 5 22:21:11.904313 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 22:21:11.904320 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 22:21:11.904326 kernel: CPU features: detected: Spectre-v4
Aug 5 22:21:11.904333 kernel: CPU features: detected: Spectre-BHB
Aug 5 22:21:11.904340 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 22:21:11.904347 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 22:21:11.904355 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 22:21:11.904361 kernel: alternatives: applying boot alternatives
Aug 5 22:21:11.904369 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 22:21:11.904376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:21:11.904383 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 22:21:11.904390 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:21:11.904396 kernel: Fallback order for Node 0: 0
Aug 5 22:21:11.904403 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 22:21:11.904410 kernel: Policy zone: DMA
Aug 5 22:21:11.904416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:21:11.904423 kernel: software IO TLB: area num 4.
Aug 5 22:21:11.904431 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 22:21:11.904438 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Aug 5 22:21:11.904445 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 22:21:11.904452 kernel: trace event string verifier disabled
Aug 5 22:21:11.904458 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:21:11.904466 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:21:11.904472 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 22:21:11.904479 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:21:11.904486 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:21:11.904493 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:21:11.904500 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 22:21:11.904507 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 22:21:11.904515 kernel: GICv3: 256 SPIs implemented
Aug 5 22:21:11.904522 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 22:21:11.904529 kernel: Root IRQ handler: gic_handle_irq
Aug 5 22:21:11.904535 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 22:21:11.904542 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 22:21:11.904548 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 22:21:11.904555 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 22:21:11.904562 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 22:21:11.904569 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 22:21:11.904576 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 22:21:11.904583 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:21:11.904591 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:21:11.904597 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 22:21:11.904604 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 22:21:11.904611 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 22:21:11.904618 kernel: arm-pv: using stolen time PV
Aug 5 22:21:11.904625 kernel: Console: colour dummy device 80x25
Aug 5 22:21:11.904632 kernel: ACPI: Core revision 20230628
Aug 5 22:21:11.904639 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 22:21:11.904645 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:21:11.904652 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:21:11.904661 kernel: SELinux: Initializing.
Aug 5 22:21:11.904684 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:21:11.904692 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:21:11.904699 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:21:11.904706 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:21:11.904713 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:21:11.904720 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:21:11.904727 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 22:21:11.904733 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 22:21:11.904742 kernel: Remapping and enabling EFI services.
Aug 5 22:21:11.904749 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:21:11.904756 kernel: Detected PIPT I-cache on CPU1
Aug 5 22:21:11.904763 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 22:21:11.904770 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 22:21:11.904776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:21:11.904783 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 22:21:11.904790 kernel: Detected PIPT I-cache on CPU2
Aug 5 22:21:11.904797 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 22:21:11.904804 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 22:21:11.904812 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:21:11.904819 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 22:21:11.904831 kernel: Detected PIPT I-cache on CPU3
Aug 5 22:21:11.904839 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 22:21:11.904847 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 22:21:11.904854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:21:11.904861 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 22:21:11.904868 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 22:21:11.904875 kernel: SMP: Total of 4 processors activated.
Aug 5 22:21:11.904883 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 22:21:11.904891 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 22:21:11.904898 kernel: CPU features: detected: Common not Private translations
Aug 5 22:21:11.904905 kernel: CPU features: detected: CRC32 instructions
Aug 5 22:21:11.904912 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 22:21:11.904920 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 22:21:11.904927 kernel: CPU features: detected: LSE atomic instructions
Aug 5 22:21:11.904934 kernel: CPU features: detected: Privileged Access Never
Aug 5 22:21:11.904943 kernel: CPU features: detected: RAS Extension Support
Aug 5 22:21:11.904950 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 22:21:11.904957 kernel: CPU: All CPU(s) started at EL1
Aug 5 22:21:11.904964 kernel: alternatives: applying system-wide alternatives
Aug 5 22:21:11.904971 kernel: devtmpfs: initialized
Aug 5 22:21:11.904979 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:21:11.904986 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 22:21:11.904993 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:21:11.905000 kernel: SMBIOS 3.0.0 present.
Aug 5 22:21:11.905009 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 22:21:11.905016 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:21:11.905023 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 22:21:11.905031 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 22:21:11.905038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 22:21:11.905045 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:21:11.905052 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Aug 5 22:21:11.905060 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:21:11.905067 kernel: cpuidle: using governor menu
Aug 5 22:21:11.905076 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 22:21:11.905083 kernel: ASID allocator initialised with 32768 entries
Aug 5 22:21:11.905090 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:21:11.905097 kernel: Serial: AMBA PL011 UART driver
Aug 5 22:21:11.905104 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 22:21:11.905111 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 22:21:11.905119 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 22:21:11.905126 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:21:11.905133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:21:11.905141 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 22:21:11.905149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 22:21:11.905156 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:21:11.905163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:21:11.905170 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 22:21:11.905178 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 22:21:11.905185 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:21:11.905192 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:21:11.905199 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:21:11.905208 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:21:11.905215 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 22:21:11.905222 kernel: ACPI: Interpreter enabled
Aug 5 22:21:11.905230 kernel: ACPI: Using GIC for interrupt routing
Aug 5 22:21:11.905237 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 22:21:11.905244 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 22:21:11.905251 kernel: printk: console [ttyAMA0] enabled
Aug 5 22:21:11.905258 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:21:11.905409 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:21:11.905492 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 22:21:11.905562 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 22:21:11.905630 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 22:21:11.905725 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 22:21:11.905737 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 22:21:11.905745 kernel: PCI host bridge to bus 0000:00
Aug 5 22:21:11.905818 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 22:21:11.905883 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 22:21:11.905943 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 22:21:11.906002 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:21:11.906083 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 22:21:11.906164 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 22:21:11.906234 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 22:21:11.906305 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 22:21:11.906373 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 22:21:11.906440 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 22:21:11.906509 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 22:21:11.906577 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 22:21:11.906637 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 22:21:11.906740 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 22:21:11.906810 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 22:21:11.906820 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 22:21:11.906828 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 22:21:11.906835 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 22:21:11.906843 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 22:21:11.906850 kernel: iommu: Default domain type: Translated
Aug 5 22:21:11.906857 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 22:21:11.906865 kernel: efivars: Registered efivars operations
Aug 5 22:21:11.906872 kernel: vgaarb: loaded
Aug 5 22:21:11.906881 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 22:21:11.906888 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:21:11.906896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:21:11.906903 kernel: pnp: PnP ACPI init
Aug 5 22:21:11.906982 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 22:21:11.906993 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 22:21:11.907001 kernel: NET: Registered PF_INET protocol family
Aug 5 22:21:11.907008 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 22:21:11.907017 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 22:21:11.907025 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:21:11.907032 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:21:11.907039 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 22:21:11.907046 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 22:21:11.907054 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:21:11.907061 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:21:11.907069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:21:11.907076 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:21:11.907084 kernel: kvm [1]: HYP mode not available
Aug 5 22:21:11.907091 kernel: Initialise system trusted keyrings
Aug 5 22:21:11.907099 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 22:21:11.907106 kernel: Key type asymmetric registered
Aug 5 22:21:11.907113 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:21:11.907120 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 22:21:11.907127 kernel: io scheduler mq-deadline registered
Aug 5 22:21:11.907135 kernel: io scheduler kyber registered
Aug 5 22:21:11.907142 kernel: io scheduler bfq registered
Aug 5 22:21:11.907150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 22:21:11.907158 kernel: ACPI: button: Power Button [PWRB]
Aug 5 22:21:11.907165 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 22:21:11.907235 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 22:21:11.907245 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:21:11.907252 kernel: thunder_xcv, ver 1.0
Aug 5 22:21:11.907259 kernel: thunder_bgx, ver 1.0
Aug 5 22:21:11.907266 kernel: nicpf, ver 1.0
Aug 5 22:21:11.907274 kernel: nicvf, ver 1.0
Aug 5 22:21:11.907352 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 22:21:11.907418 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T22:21:11 UTC (1722896471)
Aug 5 22:21:11.907428 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 22:21:11.907435 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 22:21:11.907443 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 22:21:11.907450 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 22:21:11.907457 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:21:11.907464 kernel: Segment Routing with IPv6
Aug 5 22:21:11.907474 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:21:11.907481 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:21:11.907488 kernel: Key type dns_resolver registered
Aug 5 22:21:11.907495 kernel: registered taskstats version 1
Aug 5 22:21:11.907503 kernel: Loading compiled-in X.509 certificates
Aug 5 22:21:11.907510 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 99cab5c9e2f0f3a5ca972c2df7b3d6ed64d627d4'
Aug 5 22:21:11.907517 kernel: Key type .fscrypt registered
Aug 5 22:21:11.907524 kernel: Key type fscrypt-provisioning registered
Aug 5 22:21:11.907532 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:21:11.907540 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:21:11.907547 kernel: ima: No architecture policies found
Aug 5 22:21:11.907555 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 22:21:11.907562 kernel: clk: Disabling unused clocks
Aug 5 22:21:11.907570 kernel: Freeing unused kernel memory: 39040K
Aug 5 22:21:11.907577 kernel: Run /init as init process
Aug 5 22:21:11.907584 kernel: with arguments:
Aug 5 22:21:11.907591 kernel: /init
Aug 5 22:21:11.907598 kernel: with environment:
Aug 5 22:21:11.907607 kernel: HOME=/
Aug 5 22:21:11.907614 kernel: TERM=linux
Aug 5 22:21:11.907621 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:21:11.907630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:21:11.907639 systemd[1]: Detected virtualization kvm.
Aug 5 22:21:11.907647 systemd[1]: Detected architecture arm64.
Aug 5 22:21:11.907654 systemd[1]: Running in initrd.
Aug 5 22:21:11.907662 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:21:11.907689 systemd[1]: Hostname set to .
Aug 5 22:21:11.907698 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:21:11.907705 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:21:11.907713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:21:11.907721 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:21:11.907729 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:21:11.907737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:21:11.907745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:21:11.907755 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:21:11.907765 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:21:11.907773 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:21:11.907781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:21:11.907789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:21:11.907796 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:21:11.907806 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:21:11.907814 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:21:11.907821 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:21:11.907829 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:21:11.907837 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:21:11.907845 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:21:11.907853 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:21:11.907861 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:21:11.907869 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:21:11.907878 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:21:11.907886 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:21:11.907893 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:21:11.907901 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:21:11.907909 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:21:11.907917 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:21:11.907924 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:21:11.907932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:21:11.907940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:21:11.907949 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:21:11.907957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:21:11.907965 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:21:11.907974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:21:11.908000 systemd-journald[239]: Collecting audit messages is disabled.
Aug 5 22:21:11.908034 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:21:11.908043 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:21:11.908053 systemd-journald[239]: Journal started
Aug 5 22:21:11.908073 systemd-journald[239]: Runtime Journal (/run/log/journal/f4905f8ae93e48ceac6a28ea628b970f) is 5.9M, max 47.3M, 41.4M free.
Aug 5 22:21:11.899398 systemd-modules-load[240]: Inserted module 'overlay'
Aug 5 22:21:11.911710 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:21:11.914133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:21:11.918116 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:21:11.918136 kernel: Bridge firewalling registered
Aug 5 22:21:11.916927 systemd-modules-load[240]: Inserted module 'br_netfilter'
Aug 5 22:21:11.917834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:21:11.921843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:21:11.923541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:21:11.925317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:21:11.934203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:21:11.936912 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:21:11.938940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:21:11.940092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:21:11.949787 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:21:11.953319 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:21:11.963518 dracut-cmdline[276]: dracut-dracut-053
Aug 5 22:21:11.966463 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 22:21:11.979338 systemd-resolved[280]: Positive Trust Anchors:
Aug 5 22:21:11.979357 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:21:11.979387 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:21:11.983932 systemd-resolved[280]: Defaulting to hostname 'linux'.
Aug 5 22:21:11.985804 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:21:11.986645 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:21:12.041705 kernel: SCSI subsystem initialized
Aug 5 22:21:12.047702 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:21:12.055704 kernel: iscsi: registered transport (tcp)
Aug 5 22:21:12.070743 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:21:12.070782 kernel: QLogic iSCSI HBA Driver
Aug 5 22:21:12.117959 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:21:12.127806 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:21:12.146972 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:21:12.147014 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:21:12.148206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:21:12.200695 kernel: raid6: neonx8 gen() 15750 MB/s
Aug 5 22:21:12.217710 kernel: raid6: neonx4 gen() 15653 MB/s
Aug 5 22:21:12.234694 kernel: raid6: neonx2 gen() 13209 MB/s
Aug 5 22:21:12.251691 kernel: raid6: neonx1 gen() 10479 MB/s
Aug 5 22:21:12.268689 kernel: raid6: int64x8 gen() 6955 MB/s
Aug 5 22:21:12.285686 kernel: raid6: int64x4 gen() 7344 MB/s
Aug 5 22:21:12.302699 kernel: raid6: int64x2 gen() 6125 MB/s
Aug 5 22:21:12.319693 kernel: raid6: int64x1 gen() 5050 MB/s
Aug 5 22:21:12.319717 kernel: raid6: using algorithm neonx8 gen() 15750 MB/s
Aug 5 22:21:12.336699 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Aug 5 22:21:12.336719 kernel: raid6: using neon recovery algorithm
Aug 5 22:21:12.341923 kernel: xor: measuring software checksum speed
Aug 5 22:21:12.341944 kernel: 8regs : 19869 MB/sec
Aug 5 22:21:12.342786 kernel: 32regs : 19678 MB/sec
Aug 5 22:21:12.343950 kernel: arm64_neon : 27179 MB/sec
Aug 5 22:21:12.343964 kernel: xor: using function: arm64_neon (27179 MB/sec)
Aug 5 22:21:12.395736 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:21:12.407722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:21:12.417865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:21:12.430734 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Aug 5 22:21:12.434907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:21:12.438755 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:21:12.453167 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Aug 5 22:21:12.480609 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:21:12.490854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:21:12.530624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:21:12.544959 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:21:12.557615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:21:12.560209 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:21:12.562547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:21:12.564540 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:21:12.576851 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:21:12.580155 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 22:21:12.593357 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:21:12.593468 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:21:12.593479 kernel: GPT:9289727 != 19775487
Aug 5 22:21:12.593488 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:21:12.593498 kernel: GPT:9289727 != 19775487
Aug 5 22:21:12.593507 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:21:12.593516 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:21:12.584260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:21:12.584361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:21:12.585490 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:21:12.586387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:21:12.586512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:21:12.587609 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:21:12.600131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:21:12.601386 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:21:12.612260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:21:12.618873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:21:12.623462 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (517)
Aug 5 22:21:12.623484 kernel: BTRFS: device fsid 278882ec-4175-45f0-a12b-7fddc0d6d9a3 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (520)
Aug 5 22:21:12.626443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:21:12.635753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:21:12.639876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:21:12.641164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:21:12.646210 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:21:12.647080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:21:12.656837 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:21:12.663553 disk-uuid[562]: Primary Header is updated.
Aug 5 22:21:12.663553 disk-uuid[562]: Secondary Entries is updated.
Aug 5 22:21:12.663553 disk-uuid[562]: Secondary Header is updated.
Aug 5 22:21:12.666689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:21:13.682718 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:21:13.682878 disk-uuid[563]: The operation has completed successfully.
Aug 5 22:21:13.702528 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:21:13.702637 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:21:13.725886 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:21:13.729509 sh[577]: Success
Aug 5 22:21:13.741717 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 22:21:13.768259 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:21:13.783954 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:21:13.785319 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:21:13.795250 kernel: BTRFS info (device dm-0): first mount of filesystem 278882ec-4175-45f0-a12b-7fddc0d6d9a3
Aug 5 22:21:13.795312 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:21:13.795334 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:21:13.797053 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:21:13.797068 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:21:13.800293 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:21:13.801562 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:21:13.811796 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:21:13.813275 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:21:13.820845 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:21:13.820879 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:21:13.820890 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:21:13.824717 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:21:13.831072 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:21:13.832368 kernel: BTRFS info (device vda6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:21:13.837645 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:21:13.841860 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:21:13.913964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:21:13.926162 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:21:13.940219 ignition[670]: Ignition 2.18.0
Aug 5 22:21:13.940229 ignition[670]: Stage: fetch-offline
Aug 5 22:21:13.940261 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:21:13.940269 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:21:13.940353 ignition[670]: parsed url from cmdline: ""
Aug 5 22:21:13.940357 ignition[670]: no config URL provided
Aug 5 22:21:13.940361 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:21:13.940369 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:21:13.940391 ignition[670]: op(1): [started] loading QEMU firmware config module
Aug 5 22:21:13.940405 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 22:21:13.952284 ignition[670]: op(1): [finished] loading QEMU firmware config module
Aug 5 22:21:13.956557 systemd-networkd[769]: lo: Link UP
Aug 5 22:21:13.956566 systemd-networkd[769]: lo: Gained carrier
Aug 5 22:21:13.957305 systemd-networkd[769]: Enumeration completed
Aug 5 22:21:13.957719 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:21:13.959159 systemd[1]: Reached target network.target - Network.
Aug 5 22:21:13.960802 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:21:13.960804 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:21:13.961567 systemd-networkd[769]: eth0: Link UP
Aug 5 22:21:13.961570 systemd-networkd[769]: eth0: Gained carrier
Aug 5 22:21:13.961577 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:21:13.980713 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:21:13.999669 ignition[670]: parsing config with SHA512: b2d7525f8d359d9fa4b9a5c914efbb26607c88b4c8ea91b836741dc5bf5147c3620856cbf0e87172b55e18cfdfe7772f1ee64bec426d6a8f155217b352fd2fd1
Aug 5 22:21:14.006713 unknown[670]: fetched base config from "system"
Aug 5 22:21:14.006723 unknown[670]: fetched user config from "qemu"
Aug 5 22:21:14.007124 ignition[670]: fetch-offline: fetch-offline passed
Aug 5 22:21:14.007178 ignition[670]: Ignition finished successfully
Aug 5 22:21:14.009066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:21:14.010976 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 22:21:14.019851 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:21:14.030735 ignition[777]: Ignition 2.18.0
Aug 5 22:21:14.030746 ignition[777]: Stage: kargs
Aug 5 22:21:14.030900 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:21:14.030910 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:21:14.031758 ignition[777]: kargs: kargs passed
Aug 5 22:21:14.034408 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:21:14.031804 ignition[777]: Ignition finished successfully
Aug 5 22:21:14.043824 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:21:14.054964 ignition[786]: Ignition 2.18.0
Aug 5 22:21:14.054975 ignition[786]: Stage: disks
Aug 5 22:21:14.055138 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:21:14.055148 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:21:14.056037 ignition[786]: disks: disks passed
Aug 5 22:21:14.056085 ignition[786]: Ignition finished successfully
Aug 5 22:21:14.058713 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:21:14.060026 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:21:14.061598 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:21:14.063298 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:21:14.065194 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:21:14.066846 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:21:14.082810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:21:14.093568 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:21:14.097761 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:21:14.112772 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:21:14.153693 kernel: EXT4-fs (vda9): mounted filesystem 44c9fced-dca5-4347-a15f-96911c2e5e61 r/w with ordered data mode. Quota mode: none.
Aug 5 22:21:14.154050 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:21:14.155083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:21:14.166754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:21:14.168314 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:21:14.169678 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:21:14.173727 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805)
Aug 5 22:21:14.169721 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:21:14.177604 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:21:14.177630 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:21:14.177641 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:21:14.169743 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:21:14.176703 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:21:14.179465 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:21:14.183700 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:21:14.185175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:21:14.226089 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:21:14.232131 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:21:14.236406 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:21:14.240062 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:21:14.307524 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:21:14.322798 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:21:14.325153 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:21:14.329691 kernel: BTRFS info (device vda6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:21:14.346580 ignition[918]: INFO : Ignition 2.18.0
Aug 5 22:21:14.346580 ignition[918]: INFO : Stage: mount
Aug 5 22:21:14.348309 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:21:14.348309 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:21:14.348309 ignition[918]: INFO : mount: mount passed
Aug 5 22:21:14.348309 ignition[918]: INFO : Ignition finished successfully
Aug 5 22:21:14.349040 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:21:14.351221 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:21:14.367822 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:21:14.794764 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:21:14.803862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:21:14.810026 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Aug 5 22:21:14.810064 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298
Aug 5 22:21:14.810075 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 22:21:14.811267 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:21:14.813691 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:21:14.814396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:21:14.832040 ignition[949]: INFO : Ignition 2.18.0
Aug 5 22:21:14.832040 ignition[949]: INFO : Stage: files
Aug 5 22:21:14.833392 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:21:14.833392 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:21:14.833392 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:21:14.836082 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:21:14.836082 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:21:14.838475 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:21:14.838475 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:21:14.838475 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:21:14.837541 unknown[949]: wrote ssh authorized keys file for user: core
Aug 5 22:21:14.842466 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 22:21:14.842466 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 5 22:21:15.082881 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:21:15.134564 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 22:21:15.134564 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 22:21:15.137552 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Aug 5 22:21:15.185929 systemd-networkd[769]: eth0: Gained IPv6LL
Aug 5 22:21:15.442274 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:21:15.762238 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Aug 5 22:21:15.762238 ignition[949]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 5 22:21:15.765125 ignition[949]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:21:15.785382 ignition[949]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:21:15.789231 ignition[949]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:21:15.790567 ignition[949]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:21:15.790567 ignition[949]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:21:15.790567 ignition[949]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:21:15.790567 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:21:15.790567 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:21:15.790567 ignition[949]: INFO : files: files passed
Aug 5 22:21:15.790567 ignition[949]: INFO : Ignition finished successfully
Aug 5 22:21:15.791313 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:21:15.800845 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:21:15.803845 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:21:15.805167 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:21:15.806530 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:21:15.810713 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 22:21:15.814068 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:21:15.814068 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:21:15.816965 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:21:15.819702 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:21:15.821294 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:21:15.827838 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:21:15.847399 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:21:15.847516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:21:15.849593 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:21:15.851171 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:21:15.852893 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:21:15.853656 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:21:15.869196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:21:15.877808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:21:15.885173 systemd[1]: Stopped target network.target - Network.
Aug 5 22:21:15.886181 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:21:15.887826 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:21:15.889632 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:21:15.891358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:21:15.891478 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:21:15.893731 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:21:15.895648 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:21:15.897124 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:21:15.898720 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:21:15.900589 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:21:15.902500 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:21:15.904273 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:21:15.906140 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:21:15.907748 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:21:15.909223 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:21:15.910666 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:21:15.910825 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:21:15.912877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:21:15.914622 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:21:15.916489 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:21:15.916599 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:21:15.918522 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:21:15.918642 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:21:15.921100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:21:15.921215 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:21:15.923013 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:21:15.924279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:21:15.927729 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:21:15.928670 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:21:15.930267 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:21:15.931659 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:21:15.931773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:21:15.933104 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:21:15.933189 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:21:15.934270 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:21:15.934375 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:21:15.935869 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:21:15.935969 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:21:15.948838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:21:15.950394 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:21:15.951506 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:21:15.954572 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:21:15.956449 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:21:15.956578 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:21:15.958838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:21:15.958939 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:21:15.962708 systemd-networkd[769]: eth0: DHCPv6 lease lost
Aug 5 22:21:15.963436 ignition[1003]: INFO : Ignition 2.18.0
Aug 5 22:21:15.963436 ignition[1003]: INFO : Stage: umount
Aug 5 22:21:15.965082 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:21:15.965082 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:21:15.967181 ignition[1003]: INFO : umount: umount passed
Aug 5 22:21:15.967181 ignition[1003]: INFO : Ignition finished successfully
Aug 5 22:21:15.965999 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:21:15.966746 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:21:15.970949 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:21:15.971445 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:21:15.972954 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:21:15.974666 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:21:15.974858 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:21:15.977559 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:21:15.977642 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:21:15.980639 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:21:15.980751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:21:15.982368 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:21:15.982422 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:21:15.986482 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:21:15.986529 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:21:15.987426 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:21:15.987462 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:21:15.988667 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:21:15.988733 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:21:16.000780 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:21:16.001707 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:21:16.001773 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:21:16.003562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:21:16.003608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:21:16.005305 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:21:16.005347 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:21:16.007408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:21:16.007453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:21:16.009181 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:21:16.019437 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:21:16.019573 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:21:16.021110 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:21:16.021195 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:21:16.022516 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:21:16.022598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:21:16.024845 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:21:16.024912 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:21:16.026195 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:21:16.026226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:21:16.027501 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:21:16.027544 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:21:16.029710 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:21:16.029750 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:21:16.032084 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:21:16.032127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:21:16.034695 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:21:16.034762 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:21:16.048865 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:21:16.049842 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:21:16.049896 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:21:16.051520 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:21:16.051556 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:21:16.053031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:21:16.053066 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:21:16.054736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:21:16.054769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:21:16.056574 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:21:16.056646 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:21:16.060135 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:21:16.061780 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:21:16.070635 systemd[1]: Switching root.
Aug 5 22:21:16.096494 systemd-journald[239]: Journal stopped
Aug 5 22:21:16.824156 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:21:16.824219 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:21:16.824232 kernel: SELinux: policy capability open_perms=1
Aug 5 22:21:16.824242 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:21:16.824251 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:21:16.824261 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:21:16.824270 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:21:16.824280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:21:16.824290 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:21:16.824301 kernel: audit: type=1403 audit(1722896476.245:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:21:16.824313 systemd[1]: Successfully loaded SELinux policy in 34.423ms.
Aug 5 22:21:16.824347 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.418ms.
Aug 5 22:21:16.824360 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:21:16.824371 systemd[1]: Detected virtualization kvm.
Aug 5 22:21:16.824382 systemd[1]: Detected architecture arm64.
Aug 5 22:21:16.824392 systemd[1]: Detected first boot.
Aug 5 22:21:16.824410 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:21:16.824431 zram_generator::config[1049]: No configuration found.
Aug 5 22:21:16.824445 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:21:16.824456 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:21:16.824466 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:21:16.824480 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:21:16.824491 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:21:16.824501 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:21:16.824617 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:21:16.824634 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:21:16.824696 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:21:16.824710 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:21:16.824721 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:21:16.824731 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:21:16.824742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:21:16.824753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:21:16.824764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:21:16.824774 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:21:16.824785 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:21:16.824798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:21:16.824809 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 5 22:21:16.824820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:21:16.824831 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:21:16.824841 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:21:16.824852 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:21:16.824877 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:21:16.824891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:21:16.824901 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:21:16.824913 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:21:16.824924 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:21:16.824934 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:21:16.824952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:21:16.824969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:21:16.824983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:21:16.824994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:21:16.825004 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:21:16.825017 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:21:16.825028 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:21:16.825038 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:21:16.825048 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:21:16.825059 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:21:16.825069 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:21:16.825084 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:21:16.825094 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:21:16.825106 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:21:16.825117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:21:16.825839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:21:16.825860 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:21:16.825871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:21:16.825882 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:21:16.825893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:21:16.825904 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:21:16.825915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:21:16.825934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:21:16.825944 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:21:16.825955 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:21:16.825965 kernel: fuse: init (API version 7.39)
Aug 5 22:21:16.825976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:21:16.825986 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:21:16.825996 kernel: loop: module loaded
Aug 5 22:21:16.826006 kernel: ACPI: bus type drm_connector registered
Aug 5 22:21:16.826016 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:21:16.826028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:21:16.826039 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:21:16.826049 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:21:16.826060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:21:16.826092 systemd-journald[1112]: Collecting audit messages is disabled.
Aug 5 22:21:16.826117 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:21:16.826128 systemd[1]: Stopped verity-setup.service.
Aug 5 22:21:16.826141 systemd-journald[1112]: Journal started
Aug 5 22:21:16.826161 systemd-journald[1112]: Runtime Journal (/run/log/journal/f4905f8ae93e48ceac6a28ea628b970f) is 5.9M, max 47.3M, 41.4M free.
Aug 5 22:21:16.614057 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:21:16.641071 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 22:21:16.641440 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:21:16.828806 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:21:16.829392 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:21:16.830498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:21:16.831440 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:21:16.832405 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:21:16.833286 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:21:16.834176 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:21:16.835132 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:21:16.836202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:21:16.837343 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:21:16.837477 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:21:16.838611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:21:16.838766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:21:16.840042 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:21:16.840156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:21:16.841312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:21:16.841442 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:21:16.842855 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:21:16.842983 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:21:16.844131 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:21:16.844269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:21:16.845530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:21:16.846578 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:21:16.848246 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:21:16.859632 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:21:16.869866 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:21:16.871784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:21:16.872569 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:21:16.872607 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:21:16.874548 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:21:16.876693 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:21:16.878886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:21:16.881886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:21:16.883256 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:21:16.885198 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:21:16.886235 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:21:16.889850 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:21:16.890814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:21:16.893942 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:21:16.894477 systemd-journald[1112]: Time spent on flushing to /var/log/journal/f4905f8ae93e48ceac6a28ea628b970f is 23.756ms for 853 entries.
Aug 5 22:21:16.894477 systemd-journald[1112]: System Journal (/var/log/journal/f4905f8ae93e48ceac6a28ea628b970f) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:21:16.935761 systemd-journald[1112]: Received client request to flush runtime journal.
Aug 5 22:21:16.935816 kernel: loop0: detected capacity change from 0 to 194512
Aug 5 22:21:16.935882 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:21:16.898657 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:21:16.903068 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:21:16.905385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:21:16.906949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:21:16.908244 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:21:16.909543 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:21:16.911033 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:21:16.916467 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:21:16.929920 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:21:16.934855 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:21:16.940686 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:21:16.943219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:21:16.943775 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Aug 5 22:21:16.943789 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Aug 5 22:21:16.950760 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:21:16.953734 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:21:16.956083 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:21:16.956842 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:21:16.960916 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:21:16.964161 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:21:16.986705 kernel: loop1: detected capacity change from 0 to 59688
Aug 5 22:21:16.996451 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:21:17.003884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:21:17.012708 kernel: loop2: detected capacity change from 0 to 113672
Aug 5 22:21:17.016530 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Aug 5 22:21:17.016552 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Aug 5 22:21:17.022506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:21:17.053833 kernel: loop3: detected capacity change from 0 to 194512
Aug 5 22:21:17.059719 kernel: loop4: detected capacity change from 0 to 59688
Aug 5 22:21:17.063715 kernel: loop5: detected capacity change from 0 to 113672
Aug 5 22:21:17.066291 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 22:21:17.066652 (sd-merge)[1186]: Merged extensions into '/usr'.
Aug 5 22:21:17.070763 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:21:17.070810 systemd[1]: Reloading...
Aug 5 22:21:17.126741 zram_generator::config[1211]: No configuration found.
Aug 5 22:21:17.165704 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:21:17.220778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:21:17.259291 systemd[1]: Reloading finished in 188 ms.
Aug 5 22:21:17.290909 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:21:17.292306 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:21:17.305025 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:21:17.306786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:21:17.322805 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:21:17.322823 systemd[1]: Reloading...
Aug 5 22:21:17.333124 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:21:17.339760 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:21:17.341050 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:21:17.341266 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Aug 5 22:21:17.341322 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Aug 5 22:21:17.343954 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:21:17.343968 systemd-tmpfiles[1247]: Skipping /boot
Aug 5 22:21:17.350562 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:21:17.350577 systemd-tmpfiles[1247]: Skipping /boot
Aug 5 22:21:17.368706 zram_generator::config[1270]: No configuration found.
Aug 5 22:21:17.453990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:21:17.492309 systemd[1]: Reloading finished in 169 ms.
Aug 5 22:21:17.507592 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:21:17.516156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:21:17.524063 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:21:17.526555 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:21:17.530578 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:21:17.535008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:21:17.541934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:21:17.544327 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:21:17.547812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:21:17.549493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:21:17.554767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:21:17.560898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:21:17.562073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:21:17.569776 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:21:17.571646 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:21:17.573704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:21:17.573835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:21:17.575395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:21:17.575511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:21:17.577298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:21:17.577422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:21:17.578619 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Aug 5 22:21:17.591246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:21:17.593988 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:21:17.596284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:21:17.603382 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:21:17.605513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:21:17.631218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:21:17.633382 augenrules[1364]: No rules
Aug 5 22:21:17.633807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:21:17.637627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:21:17.640844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:21:17.641711 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1351)
Aug 5 22:21:17.642742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:21:17.645395 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:21:17.649617 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 22:21:17.653078 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:21:17.655850 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:21:17.656181 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:21:17.657651 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:21:17.659102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:21:17.659292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:21:17.660742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:21:17.660880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:21:17.664858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:21:17.665003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:21:17.674037 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:21:17.675516 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:21:17.676726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:21:17.688176 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 5 22:21:17.688300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:21:17.688374 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:21:17.713100 systemd-resolved[1313]: Positive Trust Anchors:
Aug 5 22:21:17.713406 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:21:17.713483 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:21:17.720144 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Aug 5 22:21:17.720788 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1340)
Aug 5 22:21:17.723727 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:21:17.729730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:21:17.739088 systemd-networkd[1373]: lo: Link UP
Aug 5 22:21:17.739105 systemd-networkd[1373]: lo: Gained carrier
Aug 5 22:21:17.739865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 22:21:17.741499 systemd-networkd[1373]: Enumeration completed
Aug 5 22:21:17.742109 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:21:17.742601 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:21:17.742608 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:21:17.743725 systemd-networkd[1373]: eth0: Link UP
Aug 5 22:21:17.743733 systemd-networkd[1373]: eth0: Gained carrier
Aug 5 22:21:17.743746 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:21:17.745404 systemd[1]: Reached target network.target - Network.
Aug 5 22:21:17.746432 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:21:17.754786 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:21:17.755426 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection.
Aug 5 22:21:17.756790 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 22:21:17.756833 systemd-timesyncd[1378]: Initial clock synchronization to Mon 2024-08-05 22:21:17.698545 UTC.
Aug 5 22:21:17.756872 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:21:17.762973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:21:17.765247 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:21:17.779161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:21:17.784727 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:21:17.805755 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:21:17.815935 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:21:17.828787 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:21:17.834612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:21:17.868400 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:21:17.870003 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:21:17.871137 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:21:17.872107 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:21:17.873218 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:21:17.874585 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:21:17.875495 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:21:17.876786 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:21:17.877970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:21:17.878008 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:21:17.878886 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:21:17.880604 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:21:17.882741 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:21:17.891495 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:21:17.893435 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:21:17.894977 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:21:17.896111 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:21:17.896844 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:21:17.897528 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:21:17.897576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:21:17.898517 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:21:17.900580 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:21:17.901974 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:21:17.903831 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:21:17.906553 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:21:17.907445 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:21:17.912473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:21:17.914226 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:21:17.916136 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:21:17.919105 jq[1412]: false
Aug 5 22:21:17.924190 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:21:17.933237 dbus-daemon[1411]: [system] SELinux support is enabled
Aug 5 22:21:17.935921 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found loop3
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found loop4
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found loop5
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda1
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda2
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda3
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found usr
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda4
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda6
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda7
Aug 5 22:21:17.936935 extend-filesystems[1413]: Found vda9
Aug 5 22:21:17.936935 extend-filesystems[1413]: Checking size of /dev/vda9
Aug 5 22:21:17.949585 extend-filesystems[1413]: Resized partition /dev/vda9
Aug 5 22:21:17.939581 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:21:17.940058 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:21:17.943870 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:21:17.947946 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:21:17.949446 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:21:17.958433 extend-filesystems[1434]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:21:17.962103 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:21:17.964470 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:21:17.966750 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 22:21:17.966759 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:21:17.967079 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:21:17.967236 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:21:17.968051 jq[1433]: true
Aug 5 22:21:17.970826 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:21:17.970966 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:21:17.992823 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1350)
Aug 5 22:21:17.990333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:21:17.990357 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:21:17.993435 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:21:17.993457 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:21:17.996378 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:21:18.004662 jq[1441]: true
Aug 5 22:21:18.006687 tar[1437]: linux-arm64/helm
Aug 5 22:21:18.010969 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 5 22:21:18.025279 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 22:21:18.011529 systemd-logind[1424]: New seat seat0.
Aug 5 22:21:18.017999 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:21:18.026990 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 22:21:18.026990 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:21:18.026990 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 22:21:18.032914 update_engine[1430]: I0805 22:21:18.026732 1430 main.cc:92] Flatcar Update Engine starting
Aug 5 22:21:18.032914 update_engine[1430]: I0805 22:21:18.030037 1430 update_check_scheduler.cc:74] Next update check in 9m8s
Aug 5 22:21:18.029976 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:21:18.033200 extend-filesystems[1413]: Resized filesystem in /dev/vda9
Aug 5 22:21:18.032188 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:21:18.032394 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:21:18.039151 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:21:18.072267 bash[1467]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:21:18.074536 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:21:18.078414 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 22:21:18.086165 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:21:18.210737 containerd[1442]: time="2024-08-05T22:21:18.210422449Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Aug 5 22:21:18.235491 containerd[1442]: time="2024-08-05T22:21:18.235392131Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:21:18.235491 containerd[1442]: time="2024-08-05T22:21:18.235432464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.236901548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.236936142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237155143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237172161Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237238679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237279091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237290609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237343337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237517462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237533843Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:21:18.237884 containerd[1442]: time="2024-08-05T22:21:18.237542969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:21:18.238149 containerd[1442]: time="2024-08-05T22:21:18.237625269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:21:18.238149 containerd[1442]: time="2024-08-05T22:21:18.237638461Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:21:18.238149 containerd[1442]: time="2024-08-05T22:21:18.237721917Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:21:18.238149 containerd[1442]: time="2024-08-05T22:21:18.237736464Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:21:18.241076 containerd[1442]: time="2024-08-05T22:21:18.241049903Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:21:18.241174 containerd[1442]: time="2024-08-05T22:21:18.241152131Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:21:18.241275 containerd[1442]: time="2024-08-05T22:21:18.241258383Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:21:18.241407 containerd[1442]: time="2024-08-05T22:21:18.241390581Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:21:18.241620 containerd[1442]: time="2024-08-05T22:21:18.241603764Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:21:18.241704 containerd[1442]: time="2024-08-05T22:21:18.241691006Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:21:18.241834 containerd[1442]: time="2024-08-05T22:21:18.241816429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:21:18.242053 containerd[1442]: time="2024-08-05T22:21:18.242033079Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:21:18.242177 containerd[1442]: time="2024-08-05T22:21:18.242160693Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:21:18.242236 containerd[1442]: time="2024-08-05T22:21:18.242223624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:21:18.242339 containerd[1442]: time="2024-08-05T22:21:18.242325014Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:21:18.242414 containerd[1442]: time="2024-08-05T22:21:18.242399303Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242498382Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242516875Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242528991Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242544653Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242557008Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242581519Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242594352Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:21:18.242802 containerd[1442]: time="2024-08-05T22:21:18.242731891Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:21:18.243534 containerd[1442]: time="2024-08-05T22:21:18.243502162Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:21:18.243714 containerd[1442]: time="2024-08-05T22:21:18.243696494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.244024 containerd[1442]: time="2024-08-05T22:21:18.243794377Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 22:21:18.244024 containerd[1442]: time="2024-08-05T22:21:18.243824268Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 22:21:18.244931 containerd[1442]: time="2024-08-05T22:21:18.244905526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245140 containerd[1442]: time="2024-08-05T22:21:18.245121180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245203161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245221016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245236280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245248117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245258917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245269917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245282511Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245411401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245428499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245439658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245452771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245464488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245477760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245489317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.245698 containerd[1442]: time="2024-08-05T22:21:18.245499520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 22:21:18.246355 containerd[1442]: time="2024-08-05T22:21:18.246289121Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:21:18.246623 containerd[1442]: time="2024-08-05T22:21:18.246604013Z" level=info msg="Connect containerd service"
Aug 5 22:21:18.246822 containerd[1442]: time="2024-08-05T22:21:18.246802450Z" level=info msg="using legacy CRI server"
Aug 5 22:21:18.247012 containerd[1442]: time="2024-08-05T22:21:18.246883634Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:21:18.247512 containerd[1442]: time="2024-08-05T22:21:18.247189319Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:21:18.248281 containerd[1442]: time="2024-08-05T22:21:18.248255034Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:21:18.248466 containerd[1442]: time="2024-08-05T22:21:18.248442630Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:21:18.248933 containerd[1442]: time="2024-08-05T22:21:18.248850343Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:21:18.249073 containerd[1442]: time="2024-08-05T22:21:18.249056751Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:21:18.249195 containerd[1442]: time="2024-08-05T22:21:18.249152641Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:21:18.249892 containerd[1442]: time="2024-08-05T22:21:18.248781554Z" level=info msg="Start subscribing containerd event"
Aug 5 22:21:18.249892 containerd[1442]: time="2024-08-05T22:21:18.249345777Z" level=info msg="Start recovering state"
Aug 5 22:21:18.249892 containerd[1442]: time="2024-08-05T22:21:18.249414367Z" level=info msg="Start event monitor"
Aug 5 22:21:18.249892 containerd[1442]: time="2024-08-05T22:21:18.249425686Z" level=info msg="Start snapshots syncer"
Aug 5 22:21:18.249892 containerd[1442]: time="2024-08-05T22:21:18.249434773Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:21:18.249892 containerd[1442]: time="2024-08-05T22:21:18.249447566Z" level=info msg="Start streaming server"
Aug 5 22:21:18.250491 containerd[1442]: time="2024-08-05T22:21:18.250467607Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:21:18.250775 containerd[1442]: time="2024-08-05T22:21:18.250754800Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:21:18.251043 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:21:18.253232 containerd[1442]: time="2024-08-05T22:21:18.252890015Z" level=info msg="containerd successfully booted in 0.043169s"
Aug 5 22:21:18.369726 tar[1437]: linux-arm64/LICENSE
Aug 5 22:21:18.369840 tar[1437]: linux-arm64/README.md
Aug 5 22:21:18.383897 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 22:21:18.830292 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:21:18.848597 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:21:18.859904 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:21:18.865152 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:21:18.865364 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:21:18.869901 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:21:18.880629 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:21:18.884305 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:21:18.886589 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 5 22:21:18.888182 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:21:19.665880 systemd-networkd[1373]: eth0: Gained IPv6LL
Aug 5 22:21:19.667577 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:21:19.669813 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:21:19.681974 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 22:21:19.684285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:19.686348 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:21:19.700082 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 22:21:19.700659 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 22:21:19.702263 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:21:19.705662 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:21:20.151393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:20.152976 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:21:20.155792 systemd[1]: Startup finished in 549ms (kernel) + 4.539s (initrd) + 3.943s (userspace) = 9.033s.
Aug 5 22:21:20.156588 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:21:20.622457 kubelet[1525]: E0805 22:21:20.622327 1525 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:21:20.624950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:21:20.625096 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:21:24.075574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 22:21:24.076723 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:36670.service - OpenSSH per-connection server daemon (10.0.0.1:36670).
Aug 5 22:21:24.125567 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 36670 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:21:24.127201 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.135945 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:21:24.151869 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:21:24.153581 systemd-logind[1424]: New session 1 of user core.
Aug 5 22:21:24.160212 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:21:24.162988 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:21:24.168319 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.242325 systemd[1543]: Queued start job for default target default.target.
Aug 5 22:21:24.250570 systemd[1543]: Created slice app.slice - User Application Slice.
Aug 5 22:21:24.250598 systemd[1543]: Reached target paths.target - Paths.
Aug 5 22:21:24.250610 systemd[1543]: Reached target timers.target - Timers.
Aug 5 22:21:24.251815 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:21:24.261899 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:21:24.261960 systemd[1543]: Reached target sockets.target - Sockets.
Aug 5 22:21:24.261972 systemd[1543]: Reached target basic.target - Basic System.
Aug 5 22:21:24.262007 systemd[1543]: Reached target default.target - Main User Target.
Aug 5 22:21:24.262033 systemd[1543]: Startup finished in 88ms.
Aug 5 22:21:24.262352 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:21:24.263689 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:21:24.324664 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:36686.service - OpenSSH per-connection server daemon (10.0.0.1:36686).
Aug 5 22:21:24.356913 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:21:24.358114 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.361730 systemd-logind[1424]: New session 2 of user core.
Aug 5 22:21:24.376832 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:21:24.428365 sshd[1554]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:24.436822 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:36686.service: Deactivated successfully.
Aug 5 22:21:24.438606 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 22:21:24.440055 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit.
Aug 5 22:21:24.441995 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:36702.service - OpenSSH per-connection server daemon (10.0.0.1:36702).
Aug 5 22:21:24.442530 systemd-logind[1424]: Removed session 2.
Aug 5 22:21:24.474689 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 36702 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:21:24.475850 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.479325 systemd-logind[1424]: New session 3 of user core.
Aug 5 22:21:24.489815 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:21:24.537168 sshd[1561]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:24.545905 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:36702.service: Deactivated successfully.
Aug 5 22:21:24.547808 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 22:21:24.549098 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit.
Aug 5 22:21:24.550058 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:36710.service - OpenSSH per-connection server daemon (10.0.0.1:36710).
Aug 5 22:21:24.550849 systemd-logind[1424]: Removed session 3.
Aug 5 22:21:24.583040 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 36710 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:21:24.584317 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.588186 systemd-logind[1424]: New session 4 of user core.
Aug 5 22:21:24.610848 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:21:24.663570 sshd[1568]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:24.676987 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:36710.service: Deactivated successfully.
Aug 5 22:21:24.678360 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:21:24.680721 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:21:24.692998 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:36726.service - OpenSSH per-connection server daemon (10.0.0.1:36726).
Aug 5 22:21:24.694921 systemd-logind[1424]: Removed session 4.
Aug 5 22:21:24.722441 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 36726 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:21:24.723522 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.727234 systemd-logind[1424]: New session 5 of user core.
Aug 5 22:21:24.737797 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:21:24.797453 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:21:24.797666 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:21:24.815274 sudo[1578]: pam_unix(sudo:session): session closed for user root
Aug 5 22:21:24.816945 sshd[1575]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:24.830879 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:36726.service: Deactivated successfully.
Aug 5 22:21:24.832192 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:21:24.833419 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:21:24.834506 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:36728.service - OpenSSH per-connection server daemon (10.0.0.1:36728).
Aug 5 22:21:24.835252 systemd-logind[1424]: Removed session 5.
Aug 5 22:21:24.873972 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 36728 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:21:24.874637 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:24.881434 systemd-logind[1424]: New session 6 of user core.
Aug 5 22:21:24.888662 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:21:24.943786 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:21:24.944030 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:21:24.947698 sudo[1587]: pam_unix(sudo:session): session closed for user root Aug 5 22:21:24.952484 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:21:24.952764 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:21:24.975933 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:21:24.976990 auditctl[1590]: No rules Aug 5 22:21:24.977293 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:21:24.977458 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:21:24.981072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:21:25.003222 augenrules[1608]: No rules Aug 5 22:21:25.005739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:21:25.006983 sudo[1586]: pam_unix(sudo:session): session closed for user root Aug 5 22:21:25.008655 sshd[1583]: pam_unix(sshd:session): session closed for user core Aug 5 22:21:25.017812 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:36728.service: Deactivated successfully. Aug 5 22:21:25.019882 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:21:25.021535 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:21:25.022652 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:36738.service - OpenSSH per-connection server daemon (10.0.0.1:36738). Aug 5 22:21:25.024062 systemd-logind[1424]: Removed session 6. 
Aug 5 22:21:25.068228 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 36738 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:21:25.069423 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:21:25.075127 systemd-logind[1424]: New session 7 of user core. Aug 5 22:21:25.088364 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:21:25.139305 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:21:25.139555 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:21:25.242962 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:21:25.243009 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:21:25.474538 dockerd[1630]: time="2024-08-05T22:21:25.474083448Z" level=info msg="Starting up" Aug 5 22:21:25.553023 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2747137150-merged.mount: Deactivated successfully. Aug 5 22:21:25.573879 dockerd[1630]: time="2024-08-05T22:21:25.573828601Z" level=info msg="Loading containers: start." Aug 5 22:21:25.664716 kernel: Initializing XFRM netlink socket Aug 5 22:21:25.726258 systemd-networkd[1373]: docker0: Link UP Aug 5 22:21:25.749029 dockerd[1630]: time="2024-08-05T22:21:25.748936337Z" level=info msg="Loading containers: done." 
Aug 5 22:21:25.806148 dockerd[1630]: time="2024-08-05T22:21:25.806100016Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:21:25.806316 dockerd[1630]: time="2024-08-05T22:21:25.806296401Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:21:25.806427 dockerd[1630]: time="2024-08-05T22:21:25.806409498Z" level=info msg="Daemon has completed initialization" Aug 5 22:21:25.830749 dockerd[1630]: time="2024-08-05T22:21:25.830573656Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:21:25.830984 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:21:26.420687 containerd[1442]: time="2024-08-05T22:21:26.420626688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\"" Aug 5 22:21:26.550918 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck671737546-merged.mount: Deactivated successfully. Aug 5 22:21:27.228660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064892488.mount: Deactivated successfully. 
Aug 5 22:21:28.994850 containerd[1442]: time="2024-08-05T22:21:28.994727065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:28.995647 containerd[1442]: time="2024-08-05T22:21:28.995477074Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=32285113" Aug 5 22:21:28.998195 containerd[1442]: time="2024-08-05T22:21:28.996974655Z" level=info msg="ImageCreate event name:\"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:29.001270 containerd[1442]: time="2024-08-05T22:21:29.001237145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:29.002246 containerd[1442]: time="2024-08-05T22:21:29.002214556Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"32281911\" in 2.581546156s" Aug 5 22:21:29.002310 containerd[1442]: time="2024-08-05T22:21:29.002253087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\"" Aug 5 22:21:29.021072 containerd[1442]: time="2024-08-05T22:21:29.020985489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\"" Aug 5 22:21:30.875825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 5 22:21:30.882905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:30.970260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:30.973504 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:21:31.014430 kubelet[1846]: E0805 22:21:31.014377    1846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:21:31.018976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:21:31.019127 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:21:31.347951 containerd[1442]: time="2024-08-05T22:21:31.347807238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:31.349217 containerd[1442]: time="2024-08-05T22:21:31.349093575Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=29362253"
Aug 5 22:21:31.350133 containerd[1442]: time="2024-08-05T22:21:31.350111055Z" level=info msg="ImageCreate event name:\"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:31.352960 containerd[1442]: time="2024-08-05T22:21:31.352902227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:31.354119 containerd[1442]: time="2024-08-05T22:21:31.354064559Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"30849518\" in 2.333039419s"
Aug 5 22:21:31.354119 containerd[1442]: time="2024-08-05T22:21:31.354099424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\""
Aug 5 22:21:31.373969 containerd[1442]: time="2024-08-05T22:21:31.373844100Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\""
Aug 5 22:21:32.769883 containerd[1442]: time="2024-08-05T22:21:32.769836385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:32.770747 containerd[1442]: time="2024-08-05T22:21:32.770581926Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=15751351"
Aug 5 22:21:32.771440 containerd[1442]: time="2024-08-05T22:21:32.771390894Z" level=info msg="ImageCreate event name:\"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:32.774144 containerd[1442]: time="2024-08-05T22:21:32.774088398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:32.775275 containerd[1442]: time="2024-08-05T22:21:32.775244654Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"17238634\" in 1.401359458s"
Aug 5 22:21:32.775324 containerd[1442]: time="2024-08-05T22:21:32.775280601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\""
Aug 5 22:21:32.793965 containerd[1442]: time="2024-08-05T22:21:32.793932670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\""
Aug 5 22:21:35.032299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4122667372.mount: Deactivated successfully.
Aug 5 22:21:35.391745 containerd[1442]: time="2024-08-05T22:21:35.391581925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:35.392348 containerd[1442]: time="2024-08-05T22:21:35.392262899Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=25251734"
Aug 5 22:21:35.392923 containerd[1442]: time="2024-08-05T22:21:35.392872279Z" level=info msg="ImageCreate event name:\"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:35.395704 containerd[1442]: time="2024-08-05T22:21:35.395167732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:35.395781 containerd[1442]: time="2024-08-05T22:21:35.395723018Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"25250751\" in 2.601754878s"
Aug 5 22:21:35.395781 containerd[1442]: time="2024-08-05T22:21:35.395755298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\""
Aug 5 22:21:35.413126 containerd[1442]: time="2024-08-05T22:21:35.413087095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:21:35.965160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578190490.mount: Deactivated successfully.
Aug 5 22:21:37.891818 containerd[1442]: time="2024-08-05T22:21:37.891762855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:37.892379 containerd[1442]: time="2024-08-05T22:21:37.892330049Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Aug 5 22:21:37.893244 containerd[1442]: time="2024-08-05T22:21:37.893215625Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:37.898309 containerd[1442]: time="2024-08-05T22:21:37.898267434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:37.899291 containerd[1442]: time="2024-08-05T22:21:37.899170711Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.486046658s"
Aug 5 22:21:37.899291 containerd[1442]: time="2024-08-05T22:21:37.899206553Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Aug 5 22:21:37.918763 containerd[1442]: time="2024-08-05T22:21:37.918731560Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:21:38.507344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593628618.mount: Deactivated successfully.
Aug 5 22:21:38.514007 containerd[1442]: time="2024-08-05T22:21:38.513954251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:38.515290 containerd[1442]: time="2024-08-05T22:21:38.514765599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Aug 5 22:21:38.516311 containerd[1442]: time="2024-08-05T22:21:38.516157327Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:38.519383 containerd[1442]: time="2024-08-05T22:21:38.519201722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:38.519776 containerd[1442]: time="2024-08-05T22:21:38.519751252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 600.987885ms"
Aug 5 22:21:38.519845 containerd[1442]: time="2024-08-05T22:21:38.519779384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 22:21:38.541349 containerd[1442]: time="2024-08-05T22:21:38.541284395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:21:39.145445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928610539.mount: Deactivated successfully.
Aug 5 22:21:41.239327 containerd[1442]: time="2024-08-05T22:21:41.239265670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:41.240098 containerd[1442]: time="2024-08-05T22:21:41.240053461Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Aug 5 22:21:41.240758 containerd[1442]: time="2024-08-05T22:21:41.240726786Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:41.244111 containerd[1442]: time="2024-08-05T22:21:41.244074668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:41.245795 containerd[1442]: time="2024-08-05T22:21:41.245756322Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.704437839s"
Aug 5 22:21:41.245832 containerd[1442]: time="2024-08-05T22:21:41.245798927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Aug 5 22:21:41.269419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:21:41.275855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:41.363829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:41.367426 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:21:41.405453 kubelet[2010]: E0805 22:21:41.405401    2010 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:21:41.408306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:21:41.408457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:21:46.617341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:46.639148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:46.654910 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-7.scope)...
Aug 5 22:21:46.654926 systemd[1]: Reloading...
Aug 5 22:21:46.725166 zram_generator::config[2124]: No configuration found.
Aug 5 22:21:46.830915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:21:46.886176 systemd[1]: Reloading finished in 230 ms.
Aug 5 22:21:46.931107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:46.933083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:46.935090 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:21:46.936798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:46.952121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:47.049272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:47.054981 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:21:47.107726 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:21:47.107726 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:21:47.107726 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:21:47.107726 kubelet[2175]: I0805 22:21:47.106997    2175 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:21:47.877515 kubelet[2175]: I0805 22:21:47.877470    2175 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 22:21:47.877515 kubelet[2175]: I0805 22:21:47.877506    2175 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:21:47.877749 kubelet[2175]: I0805 22:21:47.877726    2175 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 22:21:47.905787 kubelet[2175]: I0805 22:21:47.903783    2175 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:21:47.909011 kubelet[2175]: E0805 22:21:47.908982    2175 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.916009 kubelet[2175]: I0805 22:21:47.915991    2175 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:21:47.916980 kubelet[2175]: I0805 22:21:47.916952    2175 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:21:47.917167 kubelet[2175]: I0805 22:21:47.917142    2175 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:21:47.917167 kubelet[2175]: I0805 22:21:47.917167    2175 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:21:47.917277 kubelet[2175]: I0805 22:21:47.917176    2175 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:21:47.917277 kubelet[2175]: I0805 22:21:47.917275    2175 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:21:47.923171 kubelet[2175]: I0805 22:21:47.923136    2175 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 22:21:47.923171 kubelet[2175]: I0805 22:21:47.923165    2175 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:21:47.923245 kubelet[2175]: I0805 22:21:47.923187    2175 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:21:47.923245 kubelet[2175]: I0805 22:21:47.923202    2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:21:47.923623 kubelet[2175]: W0805 22:21:47.923582    2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.923651 kubelet[2175]: E0805 22:21:47.923629    2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.924821 kubelet[2175]: W0805 22:21:47.924773    2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.924821 kubelet[2175]: E0805 22:21:47.924820    2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.927205 kubelet[2175]: I0805 22:21:47.927190    2175 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:21:47.933164 kubelet[2175]: I0805 22:21:47.931340    2175 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:21:47.933252 kubelet[2175]: W0805 22:21:47.933237    2175 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:21:47.936816 kubelet[2175]: I0805 22:21:47.935823    2175 server.go:1256] "Started kubelet"
Aug 5 22:21:47.936816 kubelet[2175]: I0805 22:21:47.935873    2175 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:21:47.936816 kubelet[2175]: I0805 22:21:47.936253    2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:21:47.936816 kubelet[2175]: I0805 22:21:47.936476    2175 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:21:47.937759 kubelet[2175]: I0805 22:21:47.937695    2175 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 22:21:47.939276 kubelet[2175]: I0805 22:21:47.938214    2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:21:47.944656 kubelet[2175]: I0805 22:21:47.944092    2175 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:21:47.944656 kubelet[2175]: I0805 22:21:47.944226    2175 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:21:47.944656 kubelet[2175]: I0805 22:21:47.944284    2175 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:21:47.944656 kubelet[2175]: W0805 22:21:47.944532    2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.944656 kubelet[2175]: E0805 22:21:47.944564    2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.944656 kubelet[2175]: E0805 22:21:47.944601    2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:21:47.944850 kubelet[2175]: E0805 22:21:47.944811    2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms"
Aug 5 22:21:47.947161 kubelet[2175]: I0805 22:21:47.946750    2175 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:21:47.947161 kubelet[2175]: I0805 22:21:47.946823    2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:21:47.947498 kubelet[2175]: E0805 22:21:47.947472    2175 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f53ee4463847  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:21:47.935799367 +0000 UTC m=+0.875727254,LastTimestamp:2024-08-05 22:21:47.935799367 +0000 UTC m=+0.875727254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 5 22:21:47.948236 kubelet[2175]: I0805 22:21:47.948214    2175 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:21:47.957734 kubelet[2175]: I0805 22:21:47.957556    2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:21:47.959331 kubelet[2175]: I0805 22:21:47.959307    2175 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:21:47.959968 kubelet[2175]: I0805 22:21:47.959327    2175 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:21:47.960080 kubelet[2175]: I0805 22:21:47.959551    2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:21:47.960080 kubelet[2175]: I0805 22:21:47.959987    2175 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:21:47.960080 kubelet[2175]: I0805 22:21:47.959994    2175 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:21:47.960080 kubelet[2175]: I0805 22:21:47.960013    2175 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 22:21:47.960080 kubelet[2175]: E0805 22:21:47.960075    2175 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:21:47.960786 kubelet[2175]: W0805 22:21:47.960624    2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:47.960786 kubelet[2175]: E0805 22:21:47.960668    2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Aug 5 22:21:48.020505 kubelet[2175]: I0805 22:21:48.020461    2175 policy_none.go:49] "None policy: Start"
Aug 5 22:21:48.021359 kubelet[2175]: I0805 22:21:48.021333    2175 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:21:48.021437 kubelet[2175]: I0805 22:21:48.021378    2175 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:21:48.026399 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:21:48.040067 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:21:48.042844 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:21:48.046467 kubelet[2175]: I0805 22:21:48.046424    2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:21:48.046865 kubelet[2175]: E0805 22:21:48.046848    2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Aug 5 22:21:48.060991 kubelet[2175]: E0805 22:21:48.060964    2175 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:21:48.063529 kubelet[2175]: I0805 22:21:48.063384    2175 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:21:48.063644 kubelet[2175]: I0805 22:21:48.063616    2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:21:48.065038 kubelet[2175]: E0805 22:21:48.065004    2175 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 22:21:48.146083 kubelet[2175]: E0805 22:21:48.145990    2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms"
Aug 5 22:21:48.248862 kubelet[2175]: I0805 22:21:48.248511    2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:21:48.249019 kubelet[2175]: E0805 22:21:48.249000    2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Aug 5 22:21:48.263303 kubelet[2175]: I0805 22:21:48.263275    2175 topology_manager.go:215] "Topology Admit Handler" podUID="3191b16da442b122fcea7ace33890008" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:21:48.264406 kubelet[2175]: I0805 22:21:48.264315    2175 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:21:48.268071 kubelet[2175]: I0805 22:21:48.268050    2175 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:21:48.276652 systemd[1]: Created slice kubepods-burstable-pod3191b16da442b122fcea7ace33890008.slice - libcontainer container kubepods-burstable-pod3191b16da442b122fcea7ace33890008.slice.
Aug 5 22:21:48.295612 systemd[1]: Created slice kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice - libcontainer container kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice.
Aug 5 22:21:48.302323 systemd[1]: Created slice kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice - libcontainer container kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice.
Aug 5 22:21:48.309298 kubelet[2175]: E0805 22:21:48.309265 2175 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f53ee4463847 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:21:47.935799367 +0000 UTC m=+0.875727254,LastTimestamp:2024-08-05 22:21:47.935799367 +0000 UTC m=+0.875727254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 22:21:48.345960 kubelet[2175]: I0805 22:21:48.345829 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3191b16da442b122fcea7ace33890008-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3191b16da442b122fcea7ace33890008\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:48.345960 kubelet[2175]: I0805 22:21:48.345876 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:48.346241 kubelet[2175]: I0805 22:21:48.345898 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:48.346241 kubelet[2175]: I0805 22:21:48.346121 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:48.346241 kubelet[2175]: I0805 22:21:48.346146 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:21:48.346241 kubelet[2175]: I0805 22:21:48.346164 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3191b16da442b122fcea7ace33890008-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3191b16da442b122fcea7ace33890008\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:48.346241 kubelet[2175]: I0805 22:21:48.346184 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3191b16da442b122fcea7ace33890008-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3191b16da442b122fcea7ace33890008\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:48.346397 kubelet[2175]: I0805 22:21:48.346228 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:48.346397 kubelet[2175]: I0805 22:21:48.346268 2175 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:48.547195 kubelet[2175]: E0805 22:21:48.547077 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Aug 5 22:21:48.593384 kubelet[2175]: E0805 22:21:48.593305 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:48.593978 containerd[1442]: time="2024-08-05T22:21:48.593931517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3191b16da442b122fcea7ace33890008,Namespace:kube-system,Attempt:0,}" Aug 5 22:21:48.600193 kubelet[2175]: E0805 22:21:48.600160 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:48.600753 containerd[1442]: time="2024-08-05T22:21:48.600505231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,}" Aug 5 22:21:48.605273 kubelet[2175]: E0805 22:21:48.605255 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 
22:21:48.605614 containerd[1442]: time="2024-08-05T22:21:48.605584528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,}" Aug 5 22:21:48.650257 kubelet[2175]: I0805 22:21:48.650230 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:21:48.650563 kubelet[2175]: E0805 22:21:48.650531 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Aug 5 22:21:49.051144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793824643.mount: Deactivated successfully. Aug 5 22:21:49.054220 containerd[1442]: time="2024-08-05T22:21:49.054180917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:21:49.054942 containerd[1442]: time="2024-08-05T22:21:49.054817604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 5 22:21:49.057893 containerd[1442]: time="2024-08-05T22:21:49.057806335Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:21:49.059528 containerd[1442]: time="2024-08-05T22:21:49.059481672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:21:49.061054 containerd[1442]: time="2024-08-05T22:21:49.059763973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:21:49.062427 containerd[1442]: time="2024-08-05T22:21:49.062390482Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:21:49.064483 containerd[1442]: time="2024-08-05T22:21:49.064438115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 470.410968ms" Aug 5 22:21:49.065136 containerd[1442]: time="2024-08-05T22:21:49.065089395Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:21:49.067723 containerd[1442]: time="2024-08-05T22:21:49.067644659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:21:49.068729 containerd[1442]: time="2024-08-05T22:21:49.068693984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 463.040012ms" Aug 5 22:21:49.069478 containerd[1442]: time="2024-08-05T22:21:49.069447973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 468.862544ms" Aug 5 
22:21:49.199168 kubelet[2175]: W0805 22:21:49.199103 2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.199168 kubelet[2175]: E0805 22:21:49.199171 2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.262433 containerd[1442]: time="2024-08-05T22:21:49.262308616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:49.262609 containerd[1442]: time="2024-08-05T22:21:49.262514754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:49.262609 containerd[1442]: time="2024-08-05T22:21:49.262561731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:49.262701 containerd[1442]: time="2024-08-05T22:21:49.262607589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:49.262701 containerd[1442]: time="2024-08-05T22:21:49.262625020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:49.263313 containerd[1442]: time="2024-08-05T22:21:49.262520592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:49.263466 containerd[1442]: time="2024-08-05T22:21:49.263423628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:49.263466 containerd[1442]: time="2024-08-05T22:21:49.263442978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:49.263559 containerd[1442]: time="2024-08-05T22:21:49.263458131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:49.263559 containerd[1442]: time="2024-08-05T22:21:49.263510665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:49.263559 containerd[1442]: time="2024-08-05T22:21:49.263528416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:49.263725 containerd[1442]: time="2024-08-05T22:21:49.263689057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:49.283830 systemd[1]: Started cri-containerd-2efb592d22e2c09fdc3940defbc6fe44c8dd23d738b7a505050858219319cddd.scope - libcontainer container 2efb592d22e2c09fdc3940defbc6fe44c8dd23d738b7a505050858219319cddd. Aug 5 22:21:49.284956 systemd[1]: Started cri-containerd-5b8af9b6a9995232090c3da0c77bb77f878dcdffb1dd903b0a598487439fb48b.scope - libcontainer container 5b8af9b6a9995232090c3da0c77bb77f878dcdffb1dd903b0a598487439fb48b. Aug 5 22:21:49.286586 systemd[1]: Started cri-containerd-842dd089bf89a02308b1f9e482fc892948b1f01cb560f636a8ca07619679261a.scope - libcontainer container 842dd089bf89a02308b1f9e482fc892948b1f01cb560f636a8ca07619679261a. 
Aug 5 22:21:49.301425 kubelet[2175]: W0805 22:21:49.301065 2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.302270 kubelet[2175]: E0805 22:21:49.301802 2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.319703 containerd[1442]: time="2024-08-05T22:21:49.318961449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,} returns sandbox id \"2efb592d22e2c09fdc3940defbc6fe44c8dd23d738b7a505050858219319cddd\"" Aug 5 22:21:49.321823 kubelet[2175]: E0805 22:21:49.321794 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:49.322651 containerd[1442]: time="2024-08-05T22:21:49.322439499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3191b16da442b122fcea7ace33890008,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b8af9b6a9995232090c3da0c77bb77f878dcdffb1dd903b0a598487439fb48b\"" Aug 5 22:21:49.323008 kubelet[2175]: E0805 22:21:49.322879 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:49.325684 containerd[1442]: time="2024-08-05T22:21:49.325625893Z" level=info msg="CreateContainer within sandbox \"5b8af9b6a9995232090c3da0c77bb77f878dcdffb1dd903b0a598487439fb48b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 
22:21:49.326227 containerd[1442]: time="2024-08-05T22:21:49.325823996Z" level=info msg="CreateContainer within sandbox \"2efb592d22e2c09fdc3940defbc6fe44c8dd23d738b7a505050858219319cddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:21:49.327450 containerd[1442]: time="2024-08-05T22:21:49.327422370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"842dd089bf89a02308b1f9e482fc892948b1f01cb560f636a8ca07619679261a\"" Aug 5 22:21:49.328048 kubelet[2175]: E0805 22:21:49.328020 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:49.329930 containerd[1442]: time="2024-08-05T22:21:49.329902911Z" level=info msg="CreateContainer within sandbox \"842dd089bf89a02308b1f9e482fc892948b1f01cb560f636a8ca07619679261a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:21:49.341128 containerd[1442]: time="2024-08-05T22:21:49.341002095Z" level=info msg="CreateContainer within sandbox \"5b8af9b6a9995232090c3da0c77bb77f878dcdffb1dd903b0a598487439fb48b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1466cbf3c323ab3bc343b7f90926c39bc5774906bf5a93ee5cee43fe57d66bc0\"" Aug 5 22:21:49.341839 containerd[1442]: time="2024-08-05T22:21:49.341643620Z" level=info msg="StartContainer for \"1466cbf3c323ab3bc343b7f90926c39bc5774906bf5a93ee5cee43fe57d66bc0\"" Aug 5 22:21:49.343517 containerd[1442]: time="2024-08-05T22:21:49.343455290Z" level=info msg="CreateContainer within sandbox \"2efb592d22e2c09fdc3940defbc6fe44c8dd23d738b7a505050858219319cddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"021823db094b07766b628c8d955b1c07e72ece647ee14936936abda6e2861d6d\"" Aug 5 22:21:49.344037 containerd[1442]: 
time="2024-08-05T22:21:49.344007618Z" level=info msg="StartContainer for \"021823db094b07766b628c8d955b1c07e72ece647ee14936936abda6e2861d6d\"" Aug 5 22:21:49.345979 containerd[1442]: time="2024-08-05T22:21:49.345892172Z" level=info msg="CreateContainer within sandbox \"842dd089bf89a02308b1f9e482fc892948b1f01cb560f636a8ca07619679261a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"445939a4395e7d7d36795fd31c86aa185a8364513262c72b4e76e706f713f6a5\"" Aug 5 22:21:49.346311 containerd[1442]: time="2024-08-05T22:21:49.346284179Z" level=info msg="StartContainer for \"445939a4395e7d7d36795fd31c86aa185a8364513262c72b4e76e706f713f6a5\"" Aug 5 22:21:49.347982 kubelet[2175]: E0805 22:21:49.347936 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Aug 5 22:21:49.372908 systemd[1]: Started cri-containerd-021823db094b07766b628c8d955b1c07e72ece647ee14936936abda6e2861d6d.scope - libcontainer container 021823db094b07766b628c8d955b1c07e72ece647ee14936936abda6e2861d6d. Aug 5 22:21:49.374055 systemd[1]: Started cri-containerd-1466cbf3c323ab3bc343b7f90926c39bc5774906bf5a93ee5cee43fe57d66bc0.scope - libcontainer container 1466cbf3c323ab3bc343b7f90926c39bc5774906bf5a93ee5cee43fe57d66bc0. Aug 5 22:21:49.376853 systemd[1]: Started cri-containerd-445939a4395e7d7d36795fd31c86aa185a8364513262c72b4e76e706f713f6a5.scope - libcontainer container 445939a4395e7d7d36795fd31c86aa185a8364513262c72b4e76e706f713f6a5. 
Aug 5 22:21:49.411871 containerd[1442]: time="2024-08-05T22:21:49.411769631Z" level=info msg="StartContainer for \"021823db094b07766b628c8d955b1c07e72ece647ee14936936abda6e2861d6d\" returns successfully" Aug 5 22:21:49.411871 containerd[1442]: time="2024-08-05T22:21:49.411818647Z" level=info msg="StartContainer for \"445939a4395e7d7d36795fd31c86aa185a8364513262c72b4e76e706f713f6a5\" returns successfully" Aug 5 22:21:49.411871 containerd[1442]: time="2024-08-05T22:21:49.411872260Z" level=info msg="StartContainer for \"1466cbf3c323ab3bc343b7f90926c39bc5774906bf5a93ee5cee43fe57d66bc0\" returns successfully" Aug 5 22:21:49.452484 kubelet[2175]: I0805 22:21:49.452160 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:21:49.452484 kubelet[2175]: E0805 22:21:49.452457 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Aug 5 22:21:49.476937 kubelet[2175]: W0805 22:21:49.476845 2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.476937 kubelet[2175]: E0805 22:21:49.476914 2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.502954 kubelet[2175]: W0805 22:21:49.502919 2175 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 
22:21:49.503052 kubelet[2175]: E0805 22:21:49.502957 2175 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Aug 5 22:21:49.967461 kubelet[2175]: E0805 22:21:49.967255 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:49.969919 kubelet[2175]: E0805 22:21:49.969898 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:49.971185 kubelet[2175]: E0805 22:21:49.971154 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:50.954390 kubelet[2175]: E0805 22:21:50.954319 2175 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 5 22:21:50.973972 kubelet[2175]: E0805 22:21:50.973928 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:51.054095 kubelet[2175]: I0805 22:21:51.053836 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:21:51.071250 kubelet[2175]: I0805 22:21:51.070560 2175 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:21:51.077189 kubelet[2175]: E0805 22:21:51.077159 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.177618 kubelet[2175]: E0805 22:21:51.177589 2175 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.278398 kubelet[2175]: E0805 22:21:51.278287 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.378818 kubelet[2175]: E0805 22:21:51.378790 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.479668 kubelet[2175]: E0805 22:21:51.479634 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.580172 kubelet[2175]: E0805 22:21:51.580047 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.680841 kubelet[2175]: E0805 22:21:51.680789 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.781324 kubelet[2175]: E0805 22:21:51.781286 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:51.925877 kubelet[2175]: I0805 22:21:51.925751 2175 apiserver.go:52] "Watching apiserver" Aug 5 22:21:51.945336 kubelet[2175]: I0805 22:21:51.945271 2175 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:21:53.154345 kubelet[2175]: E0805 22:21:53.153797 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:53.683611 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-7.scope)... Aug 5 22:21:53.683625 systemd[1]: Reloading... Aug 5 22:21:53.744701 zram_generator::config[2484]: No configuration found. 
Aug 5 22:21:53.834910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:21:53.902185 systemd[1]: Reloading finished in 218 ms. Aug 5 22:21:53.938869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:21:53.952592 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:21:53.952823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:21:53.952941 systemd[1]: kubelet.service: Consumed 1.195s CPU time, 114.9M memory peak, 0B memory swap peak. Aug 5 22:21:53.963027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:21:54.050554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:21:54.054154 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:21:54.090958 kubelet[2526]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:21:54.090958 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:21:54.090958 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 22:21:54.090958 kubelet[2526]: I0805 22:21:54.090935 2526 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:21:54.094868 kubelet[2526]: I0805 22:21:54.094844 2526 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 22:21:54.094868 kubelet[2526]: I0805 22:21:54.094868 2526 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:21:54.095546 kubelet[2526]: I0805 22:21:54.095024 2526 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 22:21:54.096493 kubelet[2526]: I0805 22:21:54.096417 2526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:21:54.098228 kubelet[2526]: I0805 22:21:54.098203 2526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:21:54.104602 kubelet[2526]: I0805 22:21:54.104538 2526 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:21:54.104968 kubelet[2526]: I0805 22:21:54.104953 2526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:21:54.105251 kubelet[2526]: I0805 22:21:54.105228 2526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:21:54.105461 kubelet[2526]: I0805 22:21:54.105310 2526 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:21:54.105461 kubelet[2526]: I0805 22:21:54.105322 2526 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:21:54.105461 kubelet[2526]: I0805 
22:21:54.105352 2526 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:21:54.105724 kubelet[2526]: I0805 22:21:54.105711 2526 kubelet.go:396] "Attempting to sync node with API server" Aug 5 22:21:54.106336 kubelet[2526]: I0805 22:21:54.106319 2526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:21:54.106453 kubelet[2526]: I0805 22:21:54.106441 2526 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:21:54.106453 kubelet[2526]: I0805 22:21:54.106494 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:21:54.108933 kubelet[2526]: I0805 22:21:54.108500 2526 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:21:54.108933 kubelet[2526]: I0805 22:21:54.108659 2526 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:21:54.110023 kubelet[2526]: I0805 22:21:54.109994 2526 server.go:1256] "Started kubelet" Aug 5 22:21:54.111353 kubelet[2526]: I0805 22:21:54.111326 2526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:21:54.111549 kubelet[2526]: I0805 22:21:54.111520 2526 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:21:54.111595 kubelet[2526]: I0805 22:21:54.111572 2526 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:21:54.111776 kubelet[2526]: I0805 22:21:54.111747 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:21:54.112301 kubelet[2526]: I0805 22:21:54.112282 2526 server.go:461] "Adding debug handlers to kubelet server" Aug 5 22:21:54.115528 kubelet[2526]: E0805 22:21:54.115506 2526 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:21:54.115585 kubelet[2526]: I0805 22:21:54.115540 2526 
volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:21:54.116871 kubelet[2526]: I0805 22:21:54.116092 2526 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:21:54.116871 kubelet[2526]: I0805 22:21:54.116482 2526 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:21:54.118433 kubelet[2526]: E0805 22:21:54.117857 2526 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:21:54.122036 kubelet[2526]: I0805 22:21:54.122003 2526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:21:54.137771 kubelet[2526]: I0805 22:21:54.136820 2526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:21:54.137771 kubelet[2526]: I0805 22:21:54.137035 2526 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:21:54.137771 kubelet[2526]: I0805 22:21:54.137049 2526 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:21:54.140720 kubelet[2526]: I0805 22:21:54.140698 2526 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:21:54.140804 kubelet[2526]: I0805 22:21:54.140794 2526 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:21:54.140941 kubelet[2526]: I0805 22:21:54.140928 2526 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 22:21:54.141052 kubelet[2526]: E0805 22:21:54.141041 2526 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:21:54.180578 kubelet[2526]: I0805 22:21:54.180550 2526 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:21:54.180578 kubelet[2526]: I0805 22:21:54.180570 2526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:21:54.180578 kubelet[2526]: I0805 22:21:54.180587 2526 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:21:54.180833 kubelet[2526]: I0805 22:21:54.180804 2526 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:21:54.180869 kubelet[2526]: I0805 22:21:54.180843 2526 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:21:54.180869 kubelet[2526]: I0805 22:21:54.180850 2526 policy_none.go:49] "None policy: Start" Aug 5 22:21:54.181364 kubelet[2526]: I0805 22:21:54.181344 2526 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:21:54.181425 kubelet[2526]: I0805 22:21:54.181370 2526 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:21:54.181514 kubelet[2526]: I0805 22:21:54.181500 2526 state_mem.go:75] "Updated machine memory state" Aug 5 22:21:54.185313 kubelet[2526]: I0805 22:21:54.184918 2526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:21:54.185313 kubelet[2526]: I0805 22:21:54.185130 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:21:54.219384 kubelet[2526]: I0805 22:21:54.219312 2526 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Aug 5 22:21:54.225115 kubelet[2526]: I0805 22:21:54.224893 2526 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Aug 5 22:21:54.225115 kubelet[2526]: I0805 22:21:54.224958 2526 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:21:54.241197 kubelet[2526]: I0805 22:21:54.241176 2526 topology_manager.go:215] "Topology Admit Handler" podUID="3191b16da442b122fcea7ace33890008" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:21:54.241357 kubelet[2526]: I0805 22:21:54.241344 2526 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:21:54.242055 kubelet[2526]: I0805 22:21:54.242016 2526 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:21:54.247354 kubelet[2526]: E0805 22:21:54.247332 2526 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:54.316865 kubelet[2526]: I0805 22:21:54.316742 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:54.316865 kubelet[2526]: I0805 22:21:54.316794 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3191b16da442b122fcea7ace33890008-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3191b16da442b122fcea7ace33890008\") " 
pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:54.316865 kubelet[2526]: I0805 22:21:54.316818 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3191b16da442b122fcea7ace33890008-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3191b16da442b122fcea7ace33890008\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:54.316865 kubelet[2526]: I0805 22:21:54.316845 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:54.317059 kubelet[2526]: I0805 22:21:54.316891 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:54.317059 kubelet[2526]: I0805 22:21:54.316927 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:54.317059 kubelet[2526]: I0805 22:21:54.316946 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3191b16da442b122fcea7ace33890008-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3191b16da442b122fcea7ace33890008\") " 
pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:54.317059 kubelet[2526]: I0805 22:21:54.316977 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:54.317059 kubelet[2526]: I0805 22:21:54.317000 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:21:54.547076 kubelet[2526]: E0805 22:21:54.546921 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:54.547168 kubelet[2526]: E0805 22:21:54.547110 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:54.548120 kubelet[2526]: E0805 22:21:54.548101 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:55.109977 kubelet[2526]: I0805 22:21:55.109741 2526 apiserver.go:52] "Watching apiserver" Aug 5 22:21:55.160609 kubelet[2526]: E0805 22:21:55.160557 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:55.160856 kubelet[2526]: E0805 22:21:55.160830 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:55.164013 kubelet[2526]: E0805 22:21:55.163976 2526 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:55.164988 kubelet[2526]: E0805 22:21:55.164970 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:55.179094 kubelet[2526]: I0805 22:21:55.179018 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.178979557 podStartE2EDuration="1.178979557s" podCreationTimestamp="2024-08-05 22:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:55.178253959 +0000 UTC m=+1.120884730" watchObservedRunningTime="2024-08-05 22:21:55.178979557 +0000 UTC m=+1.121610328" Aug 5 22:21:55.200321 kubelet[2526]: I0805 22:21:55.200057 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.2000238149999998 podStartE2EDuration="2.200023815s" podCreationTimestamp="2024-08-05 22:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:55.199065975 +0000 UTC m=+1.141696746" watchObservedRunningTime="2024-08-05 22:21:55.200023815 +0000 UTC m=+1.142654586" Aug 5 22:21:55.200321 kubelet[2526]: I0805 22:21:55.200125 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.200108947 podStartE2EDuration="1.200108947s" podCreationTimestamp="2024-08-05 22:21:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:55.192746363 +0000 UTC m=+1.135377134" watchObservedRunningTime="2024-08-05 22:21:55.200108947 +0000 UTC m=+1.142739718" Aug 5 22:21:55.218619 kubelet[2526]: I0805 22:21:55.218587 2526 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:21:56.161820 kubelet[2526]: E0805 22:21:56.161787 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:58.118516 sudo[1619]: pam_unix(sudo:session): session closed for user root Aug 5 22:21:58.120226 sshd[1616]: pam_unix(sshd:session): session closed for user core Aug 5 22:21:58.124261 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:21:58.124457 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:36738.service: Deactivated successfully. Aug 5 22:21:58.126047 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:21:58.126278 systemd[1]: session-7.scope: Consumed 7.445s CPU time, 138.4M memory peak, 0B memory swap peak. Aug 5 22:21:58.126904 systemd-logind[1424]: Removed session 7. 
Aug 5 22:21:58.389185 kubelet[2526]: E0805 22:21:58.389077 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:00.379699 kubelet[2526]: E0805 22:22:00.379347 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:01.173629 kubelet[2526]: E0805 22:22:01.173158 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:02.480711 kubelet[2526]: E0805 22:22:02.480544 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:03.125143 update_engine[1430]: I0805 22:22:03.125057 1430 update_attempter.cc:509] Updating boot flags... 
Aug 5 22:22:03.145760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2624) Aug 5 22:22:03.177562 kubelet[2526]: E0805 22:22:03.177521 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:03.186700 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2622) Aug 5 22:22:03.208705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2622) Aug 5 22:22:08.397514 kubelet[2526]: E0805 22:22:08.397246 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:09.806187 kubelet[2526]: I0805 22:22:09.806136 2526 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:22:09.806552 containerd[1442]: time="2024-08-05T22:22:09.806439645Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 22:22:09.806840 kubelet[2526]: I0805 22:22:09.806815 2526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:22:10.370647 kubelet[2526]: I0805 22:22:10.370163 2526 topology_manager.go:215] "Topology Admit Handler" podUID="45f18c29-bd4c-45b5-96d8-ccf81a1bc782" podNamespace="kube-system" podName="kube-proxy-2ljmh" Aug 5 22:22:10.380302 systemd[1]: Created slice kubepods-besteffort-pod45f18c29_bd4c_45b5_96d8_ccf81a1bc782.slice - libcontainer container kubepods-besteffort-pod45f18c29_bd4c_45b5_96d8_ccf81a1bc782.slice. 
Aug 5 22:22:10.420552 kubelet[2526]: I0805 22:22:10.420506 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45f18c29-bd4c-45b5-96d8-ccf81a1bc782-lib-modules\") pod \"kube-proxy-2ljmh\" (UID: \"45f18c29-bd4c-45b5-96d8-ccf81a1bc782\") " pod="kube-system/kube-proxy-2ljmh" Aug 5 22:22:10.420552 kubelet[2526]: I0805 22:22:10.420554 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45f18c29-bd4c-45b5-96d8-ccf81a1bc782-xtables-lock\") pod \"kube-proxy-2ljmh\" (UID: \"45f18c29-bd4c-45b5-96d8-ccf81a1bc782\") " pod="kube-system/kube-proxy-2ljmh" Aug 5 22:22:10.420738 kubelet[2526]: I0805 22:22:10.420581 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgts2\" (UniqueName: \"kubernetes.io/projected/45f18c29-bd4c-45b5-96d8-ccf81a1bc782-kube-api-access-jgts2\") pod \"kube-proxy-2ljmh\" (UID: \"45f18c29-bd4c-45b5-96d8-ccf81a1bc782\") " pod="kube-system/kube-proxy-2ljmh" Aug 5 22:22:10.420738 kubelet[2526]: I0805 22:22:10.420602 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45f18c29-bd4c-45b5-96d8-ccf81a1bc782-kube-proxy\") pod \"kube-proxy-2ljmh\" (UID: \"45f18c29-bd4c-45b5-96d8-ccf81a1bc782\") " pod="kube-system/kube-proxy-2ljmh" Aug 5 22:22:10.529425 kubelet[2526]: E0805 22:22:10.529367 2526 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 5 22:22:10.529425 kubelet[2526]: E0805 22:22:10.529403 2526 projected.go:200] Error preparing data for projected volume kube-api-access-jgts2 for pod kube-system/kube-proxy-2ljmh: configmap "kube-root-ca.crt" not found Aug 5 22:22:10.529579 kubelet[2526]: E0805 22:22:10.529469 2526 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45f18c29-bd4c-45b5-96d8-ccf81a1bc782-kube-api-access-jgts2 podName:45f18c29-bd4c-45b5-96d8-ccf81a1bc782 nodeName:}" failed. No retries permitted until 2024-08-05 22:22:11.029448148 +0000 UTC m=+16.972078879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jgts2" (UniqueName: "kubernetes.io/projected/45f18c29-bd4c-45b5-96d8-ccf81a1bc782-kube-api-access-jgts2") pod "kube-proxy-2ljmh" (UID: "45f18c29-bd4c-45b5-96d8-ccf81a1bc782") : configmap "kube-root-ca.crt" not found Aug 5 22:22:10.893358 kubelet[2526]: I0805 22:22:10.893293 2526 topology_manager.go:215] "Topology Admit Handler" podUID="874f9a12-8d3a-400d-b30e-e0e567234e1c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-74vv7" Aug 5 22:22:10.900880 systemd[1]: Created slice kubepods-besteffort-pod874f9a12_8d3a_400d_b30e_e0e567234e1c.slice - libcontainer container kubepods-besteffort-pod874f9a12_8d3a_400d_b30e_e0e567234e1c.slice. 
Aug 5 22:22:10.923898 kubelet[2526]: I0805 22:22:10.923747 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/874f9a12-8d3a-400d-b30e-e0e567234e1c-var-lib-calico\") pod \"tigera-operator-76c4974c85-74vv7\" (UID: \"874f9a12-8d3a-400d-b30e-e0e567234e1c\") " pod="tigera-operator/tigera-operator-76c4974c85-74vv7" Aug 5 22:22:10.923898 kubelet[2526]: I0805 22:22:10.923835 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpkh7\" (UniqueName: \"kubernetes.io/projected/874f9a12-8d3a-400d-b30e-e0e567234e1c-kube-api-access-vpkh7\") pod \"tigera-operator-76c4974c85-74vv7\" (UID: \"874f9a12-8d3a-400d-b30e-e0e567234e1c\") " pod="tigera-operator/tigera-operator-76c4974c85-74vv7" Aug 5 22:22:11.205559 containerd[1442]: time="2024-08-05T22:22:11.204826829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-74vv7,Uid:874f9a12-8d3a-400d-b30e-e0e567234e1c,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:22:11.229216 containerd[1442]: time="2024-08-05T22:22:11.228896530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:11.229216 containerd[1442]: time="2024-08-05T22:22:11.228959483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:11.229216 containerd[1442]: time="2024-08-05T22:22:11.228979800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:11.229216 containerd[1442]: time="2024-08-05T22:22:11.228992639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:11.252874 systemd[1]: Started cri-containerd-8da5355d5268ef9d4827892e4b79d75ae45f95c25f7dec2734df63d0128bff0d.scope - libcontainer container 8da5355d5268ef9d4827892e4b79d75ae45f95c25f7dec2734df63d0128bff0d. Aug 5 22:22:11.280477 containerd[1442]: time="2024-08-05T22:22:11.280425689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-74vv7,Uid:874f9a12-8d3a-400d-b30e-e0e567234e1c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8da5355d5268ef9d4827892e4b79d75ae45f95c25f7dec2734df63d0128bff0d\"" Aug 5 22:22:11.283710 containerd[1442]: time="2024-08-05T22:22:11.283262312Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:22:11.295434 kubelet[2526]: E0805 22:22:11.295402 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:11.296237 containerd[1442]: time="2024-08-05T22:22:11.295837978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2ljmh,Uid:45f18c29-bd4c-45b5-96d8-ccf81a1bc782,Namespace:kube-system,Attempt:0,}" Aug 5 22:22:11.314251 containerd[1442]: time="2024-08-05T22:22:11.314144004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:11.314251 containerd[1442]: time="2024-08-05T22:22:11.314208996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:11.314251 containerd[1442]: time="2024-08-05T22:22:11.314229954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:11.314251 containerd[1442]: time="2024-08-05T22:22:11.314243912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:11.334888 systemd[1]: Started cri-containerd-1d99a8aeb435f98765b2c2ae633c665554237ca61737536242284167fbadb83a.scope - libcontainer container 1d99a8aeb435f98765b2c2ae633c665554237ca61737536242284167fbadb83a. Aug 5 22:22:11.354290 containerd[1442]: time="2024-08-05T22:22:11.354240001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2ljmh,Uid:45f18c29-bd4c-45b5-96d8-ccf81a1bc782,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d99a8aeb435f98765b2c2ae633c665554237ca61737536242284167fbadb83a\"" Aug 5 22:22:11.355512 kubelet[2526]: E0805 22:22:11.355486 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:11.363390 containerd[1442]: time="2024-08-05T22:22:11.363268009Z" level=info msg="CreateContainer within sandbox \"1d99a8aeb435f98765b2c2ae633c665554237ca61737536242284167fbadb83a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:22:11.376841 containerd[1442]: time="2024-08-05T22:22:11.376788443Z" level=info msg="CreateContainer within sandbox \"1d99a8aeb435f98765b2c2ae633c665554237ca61737536242284167fbadb83a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74569d472b436c8c004053ff78c48202fda0b80ece994b117829203f683aa3a6\"" Aug 5 22:22:11.378728 containerd[1442]: time="2024-08-05T22:22:11.377646261Z" level=info msg="StartContainer for \"74569d472b436c8c004053ff78c48202fda0b80ece994b117829203f683aa3a6\"" Aug 5 22:22:11.409829 systemd[1]: Started cri-containerd-74569d472b436c8c004053ff78c48202fda0b80ece994b117829203f683aa3a6.scope - libcontainer container 74569d472b436c8c004053ff78c48202fda0b80ece994b117829203f683aa3a6. 
Aug 5 22:22:11.431201 containerd[1442]: time="2024-08-05T22:22:11.431143106Z" level=info msg="StartContainer for \"74569d472b436c8c004053ff78c48202fda0b80ece994b117829203f683aa3a6\" returns successfully" Aug 5 22:22:12.196256 kubelet[2526]: E0805 22:22:12.195935 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:12.315061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489267412.mount: Deactivated successfully. Aug 5 22:22:14.639231 containerd[1442]: time="2024-08-05T22:22:14.639179404Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:14.640349 containerd[1442]: time="2024-08-05T22:22:14.639731150Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473646" Aug 5 22:22:14.641599 containerd[1442]: time="2024-08-05T22:22:14.640893076Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:14.643566 containerd[1442]: time="2024-08-05T22:22:14.643525978Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:14.644288 containerd[1442]: time="2024-08-05T22:22:14.644249747Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 3.360839852s" Aug 5 22:22:14.644470 containerd[1442]: 
time="2024-08-05T22:22:14.644446408Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Aug 5 22:22:14.646515 containerd[1442]: time="2024-08-05T22:22:14.646471690Z" level=info msg="CreateContainer within sandbox \"8da5355d5268ef9d4827892e4b79d75ae45f95c25f7dec2734df63d0128bff0d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:22:14.658623 containerd[1442]: time="2024-08-05T22:22:14.658580665Z" level=info msg="CreateContainer within sandbox \"8da5355d5268ef9d4827892e4b79d75ae45f95c25f7dec2734df63d0128bff0d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2ac182f4fd2bd5b46865da448e003bec47e7f2b5b69a5aee36baa53cbe911296\"" Aug 5 22:22:14.659281 containerd[1442]: time="2024-08-05T22:22:14.659257359Z" level=info msg="StartContainer for \"2ac182f4fd2bd5b46865da448e003bec47e7f2b5b69a5aee36baa53cbe911296\"" Aug 5 22:22:14.687866 systemd[1]: Started cri-containerd-2ac182f4fd2bd5b46865da448e003bec47e7f2b5b69a5aee36baa53cbe911296.scope - libcontainer container 2ac182f4fd2bd5b46865da448e003bec47e7f2b5b69a5aee36baa53cbe911296. 
Aug 5 22:22:14.710657 containerd[1442]: time="2024-08-05T22:22:14.710606333Z" level=info msg="StartContainer for \"2ac182f4fd2bd5b46865da448e003bec47e7f2b5b69a5aee36baa53cbe911296\" returns successfully" Aug 5 22:22:15.212007 kubelet[2526]: I0805 22:22:15.211913 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2ljmh" podStartSLOduration=5.211872206 podStartE2EDuration="5.211872206s" podCreationTimestamp="2024-08-05 22:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:22:12.20528255 +0000 UTC m=+18.147913401" watchObservedRunningTime="2024-08-05 22:22:15.211872206 +0000 UTC m=+21.154502977" Aug 5 22:22:15.212702 kubelet[2526]: I0805 22:22:15.212556 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-74vv7" podStartSLOduration=1.849690078 podStartE2EDuration="5.212499428s" podCreationTimestamp="2024-08-05 22:22:10 +0000 UTC" firstStartedPulling="2024-08-05 22:22:11.281984544 +0000 UTC m=+17.224615315" lastFinishedPulling="2024-08-05 22:22:14.644793894 +0000 UTC m=+20.587424665" observedRunningTime="2024-08-05 22:22:15.210922733 +0000 UTC m=+21.153553504" watchObservedRunningTime="2024-08-05 22:22:15.212499428 +0000 UTC m=+21.155130239" Aug 5 22:22:18.363230 kubelet[2526]: I0805 22:22:18.363180 2526 topology_manager.go:215] "Topology Admit Handler" podUID="0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" podNamespace="calico-system" podName="calico-typha-6df77d5cb9-wppv9" Aug 5 22:22:18.371109 kubelet[2526]: I0805 22:22:18.371072 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82cwl\" (UniqueName: \"kubernetes.io/projected/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-kube-api-access-82cwl\") pod \"calico-typha-6df77d5cb9-wppv9\" (UID: \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\") " 
pod="calico-system/calico-typha-6df77d5cb9-wppv9" Aug 5 22:22:18.371297 kubelet[2526]: I0805 22:22:18.371118 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-tigera-ca-bundle\") pod \"calico-typha-6df77d5cb9-wppv9\" (UID: \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\") " pod="calico-system/calico-typha-6df77d5cb9-wppv9" Aug 5 22:22:18.371297 kubelet[2526]: I0805 22:22:18.371146 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-typha-certs\") pod \"calico-typha-6df77d5cb9-wppv9\" (UID: \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\") " pod="calico-system/calico-typha-6df77d5cb9-wppv9" Aug 5 22:22:18.381340 systemd[1]: Created slice kubepods-besteffort-pod0e66ea83_6b0f_4244_a06d_6a9815b0a3cd.slice - libcontainer container kubepods-besteffort-pod0e66ea83_6b0f_4244_a06d_6a9815b0a3cd.slice. Aug 5 22:22:18.414410 kubelet[2526]: I0805 22:22:18.414341 2526 topology_manager.go:215] "Topology Admit Handler" podUID="6e49443a-7575-4b86-95e7-446d05915bf6" podNamespace="calico-system" podName="calico-node-rngjl" Aug 5 22:22:18.423092 systemd[1]: Created slice kubepods-besteffort-pod6e49443a_7575_4b86_95e7_446d05915bf6.slice - libcontainer container kubepods-besteffort-pod6e49443a_7575_4b86_95e7_446d05915bf6.slice. 
Aug 5 22:22:18.471524 kubelet[2526]: I0805 22:22:18.471473 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e49443a-7575-4b86-95e7-446d05915bf6-tigera-ca-bundle\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.471524 kubelet[2526]: I0805 22:22:18.471534 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-lib-modules\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473066 kubelet[2526]: I0805 22:22:18.471696 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-policysync\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473066 kubelet[2526]: I0805 22:22:18.471808 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-xtables-lock\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473066 kubelet[2526]: I0805 22:22:18.471844 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-lib-calico\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473066 kubelet[2526]: I0805 22:22:18.471865 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e49443a-7575-4b86-95e7-446d05915bf6-node-certs\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473066 kubelet[2526]: I0805 22:22:18.471887 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-run-calico\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473783 kubelet[2526]: I0805 22:22:18.471909 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-bin-dir\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473783 kubelet[2526]: I0805 22:22:18.471940 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-net-dir\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473783 kubelet[2526]: I0805 22:22:18.472320 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-log-dir\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473783 kubelet[2526]: I0805 22:22:18.472358 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-flexvol-driver-host\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.473783 kubelet[2526]: I0805 22:22:18.472386 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqbmd\" (UniqueName: \"kubernetes.io/projected/6e49443a-7575-4b86-95e7-446d05915bf6-kube-api-access-bqbmd\") pod \"calico-node-rngjl\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " pod="calico-system/calico-node-rngjl"
Aug 5 22:22:18.532828 kubelet[2526]: I0805 22:22:18.532795 2526 topology_manager.go:215] "Topology Admit Handler" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223" podNamespace="calico-system" podName="csi-node-driver-4gbsx"
Aug 5 22:22:18.533844 kubelet[2526]: E0805 22:22:18.533657 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223"
Aug 5 22:22:18.572647 kubelet[2526]: I0805 22:22:18.572608 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f86a8cc4-dcb1-4c07-ba89-5368804c0223-registration-dir\") pod \"csi-node-driver-4gbsx\" (UID: \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\") " pod="calico-system/csi-node-driver-4gbsx"
Aug 5 22:22:18.572785 kubelet[2526]: I0805 22:22:18.572682 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f86a8cc4-dcb1-4c07-ba89-5368804c0223-socket-dir\") pod \"csi-node-driver-4gbsx\" (UID: \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\") " pod="calico-system/csi-node-driver-4gbsx"
Aug 5 22:22:18.572785 kubelet[2526]: I0805 22:22:18.572728 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f86a8cc4-dcb1-4c07-ba89-5368804c0223-varrun\") pod \"csi-node-driver-4gbsx\" (UID: \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\") " pod="calico-system/csi-node-driver-4gbsx"
Aug 5 22:22:18.572785 kubelet[2526]: I0805 22:22:18.572771 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f86a8cc4-dcb1-4c07-ba89-5368804c0223-kubelet-dir\") pod \"csi-node-driver-4gbsx\" (UID: \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\") " pod="calico-system/csi-node-driver-4gbsx"
Aug 5 22:22:18.572864 kubelet[2526]: I0805 22:22:18.572833 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt78m\" (UniqueName: \"kubernetes.io/projected/f86a8cc4-dcb1-4c07-ba89-5368804c0223-kube-api-access-bt78m\") pod \"csi-node-driver-4gbsx\" (UID: \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\") " pod="calico-system/csi-node-driver-4gbsx"
Aug 5 22:22:18.589718 kubelet[2526]: E0805 22:22:18.586927 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.589718 kubelet[2526]: W0805 22:22:18.586950 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.589718 kubelet[2526]: E0805 22:22:18.586971 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.673608 kubelet[2526]: E0805 22:22:18.673573 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.673608 kubelet[2526]: W0805 22:22:18.673598 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.673608 kubelet[2526]: E0805 22:22:18.673620 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.673908 kubelet[2526]: E0805 22:22:18.673890 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.673908 kubelet[2526]: W0805 22:22:18.673904 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.673998 kubelet[2526]: E0805 22:22:18.673923 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.674129 kubelet[2526]: E0805 22:22:18.674110 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.674129 kubelet[2526]: W0805 22:22:18.674122 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.674129 kubelet[2526]: E0805 22:22:18.674139 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.674519 kubelet[2526]: E0805 22:22:18.674500 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.674657 kubelet[2526]: W0805 22:22:18.674586 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.674657 kubelet[2526]: E0805 22:22:18.674618 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.675032 kubelet[2526]: E0805 22:22:18.674927 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.675032 kubelet[2526]: W0805 22:22:18.674939 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.675032 kubelet[2526]: E0805 22:22:18.674959 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.675211 kubelet[2526]: E0805 22:22:18.675198 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.675282 kubelet[2526]: W0805 22:22:18.675271 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.675429 kubelet[2526]: E0805 22:22:18.675354 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.675590 kubelet[2526]: E0805 22:22:18.675568 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.675590 kubelet[2526]: W0805 22:22:18.675586 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.675753 kubelet[2526]: E0805 22:22:18.675606 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.675780 kubelet[2526]: E0805 22:22:18.675754 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.675780 kubelet[2526]: W0805 22:22:18.675762 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.675780 kubelet[2526]: E0805 22:22:18.675773 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.675937 kubelet[2526]: E0805 22:22:18.675916 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.675937 kubelet[2526]: W0805 22:22:18.675928 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.676042 kubelet[2526]: E0805 22:22:18.676020 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.676097 kubelet[2526]: E0805 22:22:18.676088 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.676097 kubelet[2526]: W0805 22:22:18.676096 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.676271 kubelet[2526]: E0805 22:22:18.676156 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.676271 kubelet[2526]: E0805 22:22:18.676229 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.676271 kubelet[2526]: W0805 22:22:18.676237 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.676362 kubelet[2526]: E0805 22:22:18.676339 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.676441 kubelet[2526]: E0805 22:22:18.676427 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.676441 kubelet[2526]: W0805 22:22:18.676438 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.676505 kubelet[2526]: E0805 22:22:18.676452 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.676646 kubelet[2526]: E0805 22:22:18.676633 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.676646 kubelet[2526]: W0805 22:22:18.676645 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.676722 kubelet[2526]: E0805 22:22:18.676659 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.676840 kubelet[2526]: E0805 22:22:18.676826 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.676840 kubelet[2526]: W0805 22:22:18.676838 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.677042 kubelet[2526]: E0805 22:22:18.676852 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.677042 kubelet[2526]: E0805 22:22:18.677034 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.677042 kubelet[2526]: W0805 22:22:18.677042 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.677168 kubelet[2526]: E0805 22:22:18.677057 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.677197 kubelet[2526]: E0805 22:22:18.677191 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.677220 kubelet[2526]: W0805 22:22:18.677198 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.677220 kubelet[2526]: E0805 22:22:18.677211 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.677397 kubelet[2526]: E0805 22:22:18.677358 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.677397 kubelet[2526]: W0805 22:22:18.677371 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.677454 kubelet[2526]: E0805 22:22:18.677442 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.677528 kubelet[2526]: E0805 22:22:18.677517 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.677528 kubelet[2526]: W0805 22:22:18.677527 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.677586 kubelet[2526]: E0805 22:22:18.677578 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.677802 kubelet[2526]: E0805 22:22:18.677789 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.677802 kubelet[2526]: W0805 22:22:18.677800 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.677879 kubelet[2526]: E0805 22:22:18.677862 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.678001 kubelet[2526]: E0805 22:22:18.677987 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.678001 kubelet[2526]: W0805 22:22:18.677999 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.678054 kubelet[2526]: E0805 22:22:18.678016 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.678182 kubelet[2526]: E0805 22:22:18.678169 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.678182 kubelet[2526]: W0805 22:22:18.678181 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.678241 kubelet[2526]: E0805 22:22:18.678195 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.678358 kubelet[2526]: E0805 22:22:18.678347 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.678358 kubelet[2526]: W0805 22:22:18.678358 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.678439 kubelet[2526]: E0805 22:22:18.678383 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.678571 kubelet[2526]: E0805 22:22:18.678561 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.678571 kubelet[2526]: W0805 22:22:18.678570 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.678632 kubelet[2526]: E0805 22:22:18.678582 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.679122 kubelet[2526]: E0805 22:22:18.679099 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.679122 kubelet[2526]: W0805 22:22:18.679115 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.679189 kubelet[2526]: E0805 22:22:18.679133 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.679779 kubelet[2526]: E0805 22:22:18.679359 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.679779 kubelet[2526]: W0805 22:22:18.679372 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.679779 kubelet[2526]: E0805 22:22:18.679386 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.691255 kubelet[2526]: E0805 22:22:18.691221 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:18.692396 containerd[1442]: time="2024-08-05T22:22:18.691963495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df77d5cb9-wppv9,Uid:0e66ea83-6b0f-4244-a06d-6a9815b0a3cd,Namespace:calico-system,Attempt:0,}"
Aug 5 22:22:18.692728 kubelet[2526]: E0805 22:22:18.692604 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:18.692728 kubelet[2526]: W0805 22:22:18.692617 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:18.692728 kubelet[2526]: E0805 22:22:18.692659 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:18.717601 containerd[1442]: time="2024-08-05T22:22:18.717470687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:18.717601 containerd[1442]: time="2024-08-05T22:22:18.717521923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:18.717601 containerd[1442]: time="2024-08-05T22:22:18.717535362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:18.717601 containerd[1442]: time="2024-08-05T22:22:18.717546081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:18.728495 kubelet[2526]: E0805 22:22:18.728457 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:18.730604 containerd[1442]: time="2024-08-05T22:22:18.730359552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rngjl,Uid:6e49443a-7575-4b86-95e7-446d05915bf6,Namespace:calico-system,Attempt:0,}"
Aug 5 22:22:18.736985 systemd[1]: Started cri-containerd-b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae.scope - libcontainer container b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae.
Aug 5 22:22:18.750112 containerd[1442]: time="2024-08-05T22:22:18.750027345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:18.750246 containerd[1442]: time="2024-08-05T22:22:18.750089461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:18.750246 containerd[1442]: time="2024-08-05T22:22:18.750108819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:18.750246 containerd[1442]: time="2024-08-05T22:22:18.750124778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:18.764936 systemd[1]: Started cri-containerd-d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64.scope - libcontainer container d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64.
Aug 5 22:22:18.776879 containerd[1442]: time="2024-08-05T22:22:18.776834359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df77d5cb9-wppv9,Uid:0e66ea83-6b0f-4244-a06d-6a9815b0a3cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\""
Aug 5 22:22:18.777704 kubelet[2526]: E0805 22:22:18.777585 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:18.785275 containerd[1442]: time="2024-08-05T22:22:18.785223964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Aug 5 22:22:18.803601 containerd[1442]: time="2024-08-05T22:22:18.803553379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rngjl,Uid:6e49443a-7575-4b86-95e7-446d05915bf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\""
Aug 5 22:22:18.804405 kubelet[2526]: E0805 22:22:18.804382 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:20.142263 kubelet[2526]: E0805 22:22:20.142220 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223"
Aug 5 22:22:21.623907 containerd[1442]: time="2024-08-05T22:22:21.623855761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:21.624607 containerd[1442]: time="2024-08-05T22:22:21.624525279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513"
Aug 5 22:22:21.625266 containerd[1442]: time="2024-08-05T22:22:21.625232995Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:21.631073 containerd[1442]: time="2024-08-05T22:22:21.631038753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:21.631859 containerd[1442]: time="2024-08-05T22:22:21.631833224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.846551424s"
Aug 5 22:22:21.631946 containerd[1442]: time="2024-08-05T22:22:21.631930578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\""
Aug 5 22:22:21.654082 containerd[1442]: time="2024-08-05T22:22:21.654052560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Aug 5 22:22:21.661089 containerd[1442]: time="2024-08-05T22:22:21.661058283Z" level=info msg="CreateContainer within sandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 5 22:22:21.673037 containerd[1442]: time="2024-08-05T22:22:21.672984580Z" level=info msg="CreateContainer within sandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\""
Aug 5 22:22:21.674566 containerd[1442]: time="2024-08-05T22:22:21.673471430Z" level=info msg="StartContainer for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\""
Aug 5 22:22:21.700852 systemd[1]: Started cri-containerd-2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8.scope - libcontainer container 2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8.
Aug 5 22:22:21.735372 containerd[1442]: time="2024-08-05T22:22:21.733591845Z" level=info msg="StartContainer for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" returns successfully"
Aug 5 22:22:22.143846 kubelet[2526]: E0805 22:22:22.143796 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223"
Aug 5 22:22:22.223111 containerd[1442]: time="2024-08-05T22:22:22.223044430Z" level=info msg="StopContainer for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" with timeout 300 (s)"
Aug 5 22:22:22.224125 containerd[1442]: time="2024-08-05T22:22:22.224077009Z" level=info msg="Stop container \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" with signal terminated"
Aug 5 22:22:22.237371 systemd[1]: cri-containerd-2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8.scope: Deactivated successfully.
Aug 5 22:22:22.253926 kubelet[2526]: I0805 22:22:22.253535 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6df77d5cb9-wppv9" podStartSLOduration=1.399463815 podStartE2EDuration="4.253492761s" podCreationTimestamp="2024-08-05 22:22:18 +0000 UTC" firstStartedPulling="2024-08-05 22:22:18.778331805 +0000 UTC m=+24.720962576" lastFinishedPulling="2024-08-05 22:22:21.632360751 +0000 UTC m=+27.574991522" observedRunningTime="2024-08-05 22:22:22.253247086 +0000 UTC m=+28.195877857" watchObservedRunningTime="2024-08-05 22:22:22.253492761 +0000 UTC m=+28.196123532"
Aug 5 22:22:22.401148 containerd[1442]: time="2024-08-05T22:22:22.400819437Z" level=info msg="shim disconnected" id=2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8 namespace=k8s.io
Aug 5 22:22:22.401148 containerd[1442]: time="2024-08-05T22:22:22.400871956Z" level=warning msg="cleaning up after shim disconnected" id=2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8 namespace=k8s.io
Aug 5 22:22:22.401148 containerd[1442]: time="2024-08-05T22:22:22.400881796Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:22:22.414189 containerd[1442]: time="2024-08-05T22:22:22.414137722Z" level=info msg="StopContainer for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" returns successfully"
Aug 5 22:22:22.414586 containerd[1442]: time="2024-08-05T22:22:22.414562873Z" level=info msg="StopPodSandbox for \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\""
Aug 5 22:22:22.414631 containerd[1442]: time="2024-08-05T22:22:22.414595992Z" level=info msg="Container to stop \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 5 22:22:22.419981 systemd[1]: cri-containerd-b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae.scope: Deactivated successfully.
Aug 5 22:22:22.440846 containerd[1442]: time="2024-08-05T22:22:22.440639694Z" level=info msg="shim disconnected" id=b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae namespace=k8s.io
Aug 5 22:22:22.440846 containerd[1442]: time="2024-08-05T22:22:22.440714133Z" level=warning msg="cleaning up after shim disconnected" id=b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae namespace=k8s.io
Aug 5 22:22:22.440846 containerd[1442]: time="2024-08-05T22:22:22.440724813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:22:22.450846 containerd[1442]: time="2024-08-05T22:22:22.450808604Z" level=info msg="TearDown network for sandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" successfully"
Aug 5 22:22:22.450846 containerd[1442]: time="2024-08-05T22:22:22.450839084Z" level=info msg="StopPodSandbox for \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" returns successfully"
Aug 5 22:22:22.469272 kubelet[2526]: I0805 22:22:22.469220 2526 topology_manager.go:215] "Topology Admit Handler" podUID="6a7e6375-28c1-4b5d-9013-d5546ab2cd6a" podNamespace="calico-system" podName="calico-typha-6c98fb7cd8-7ns5r"
Aug 5 22:22:22.469414 kubelet[2526]: E0805 22:22:22.469304 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" containerName="calico-typha"
Aug 5 22:22:22.469414 kubelet[2526]: I0805 22:22:22.469339 2526 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" containerName="calico-typha"
Aug 5 22:22:22.479375 systemd[1]: Created slice kubepods-besteffort-pod6a7e6375_28c1_4b5d_9013_d5546ab2cd6a.slice - libcontainer container kubepods-besteffort-pod6a7e6375_28c1_4b5d_9013_d5546ab2cd6a.slice.
Aug 5 22:22:22.502714 kubelet[2526]: E0805 22:22:22.502687 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.502714 kubelet[2526]: W0805 22:22:22.502708 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.502844 kubelet[2526]: E0805 22:22:22.502729 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.502917 kubelet[2526]: E0805 22:22:22.502904 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.502917 kubelet[2526]: W0805 22:22:22.502915 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.502978 kubelet[2526]: E0805 22:22:22.502929 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.503103 kubelet[2526]: E0805 22:22:22.503090 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.503103 kubelet[2526]: W0805 22:22:22.503101 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.503171 kubelet[2526]: E0805 22:22:22.503113 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.503270 kubelet[2526]: E0805 22:22:22.503259 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.503270 kubelet[2526]: W0805 22:22:22.503269 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.503328 kubelet[2526]: E0805 22:22:22.503280 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.503445 kubelet[2526]: E0805 22:22:22.503427 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.503474 kubelet[2526]: W0805 22:22:22.503444 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.503474 kubelet[2526]: E0805 22:22:22.503458 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.503608 kubelet[2526]: E0805 22:22:22.503596 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.503608 kubelet[2526]: W0805 22:22:22.503607 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.503661 kubelet[2526]: E0805 22:22:22.503619 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.503781 kubelet[2526]: E0805 22:22:22.503770 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.503812 kubelet[2526]: W0805 22:22:22.503781 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.503812 kubelet[2526]: E0805 22:22:22.503792 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.503939 kubelet[2526]: E0805 22:22:22.503928 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.503939 kubelet[2526]: W0805 22:22:22.503938 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.503990 kubelet[2526]: E0805 22:22:22.503948 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.504116 kubelet[2526]: E0805 22:22:22.504105 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.504146 kubelet[2526]: W0805 22:22:22.504115 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.504146 kubelet[2526]: E0805 22:22:22.504126 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.504283 kubelet[2526]: E0805 22:22:22.504271 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.504283 kubelet[2526]: W0805 22:22:22.504282 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.504335 kubelet[2526]: E0805 22:22:22.504293 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.504446 kubelet[2526]: E0805 22:22:22.504435 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.504446 kubelet[2526]: W0805 22:22:22.504445 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.504503 kubelet[2526]: E0805 22:22:22.504456 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.504627 kubelet[2526]: E0805 22:22:22.504616 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.504653 kubelet[2526]: W0805 22:22:22.504627 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.504653 kubelet[2526]: E0805 22:22:22.504638 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.504929 kubelet[2526]: E0805 22:22:22.504910 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.504929 kubelet[2526]: W0805 22:22:22.504927 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505002 kubelet[2526]: E0805 22:22:22.504941 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.505002 kubelet[2526]: I0805 22:22:22.504970 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6a7e6375-28c1-4b5d-9013-d5546ab2cd6a-typha-certs\") pod \"calico-typha-6c98fb7cd8-7ns5r\" (UID: \"6a7e6375-28c1-4b5d-9013-d5546ab2cd6a\") " pod="calico-system/calico-typha-6c98fb7cd8-7ns5r"
Aug 5 22:22:22.505134 kubelet[2526]: E0805 22:22:22.505115 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.505134 kubelet[2526]: W0805 22:22:22.505126 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505210 kubelet[2526]: E0805 22:22:22.505139 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.505210 kubelet[2526]: I0805 22:22:22.505156 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7e6375-28c1-4b5d-9013-d5546ab2cd6a-tigera-ca-bundle\") pod \"calico-typha-6c98fb7cd8-7ns5r\" (UID: \"6a7e6375-28c1-4b5d-9013-d5546ab2cd6a\") " pod="calico-system/calico-typha-6c98fb7cd8-7ns5r"
Aug 5 22:22:22.505312 kubelet[2526]: E0805 22:22:22.505301 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.505312 kubelet[2526]: W0805 22:22:22.505311 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505368 kubelet[2526]: E0805 22:22:22.505322 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.505368 kubelet[2526]: I0805 22:22:22.505340 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4xhj\" (UniqueName: \"kubernetes.io/projected/6a7e6375-28c1-4b5d-9013-d5546ab2cd6a-kube-api-access-x4xhj\") pod \"calico-typha-6c98fb7cd8-7ns5r\" (UID: \"6a7e6375-28c1-4b5d-9013-d5546ab2cd6a\") " pod="calico-system/calico-typha-6c98fb7cd8-7ns5r"
Aug 5 22:22:22.505509 kubelet[2526]: E0805 22:22:22.505495 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.505509 kubelet[2526]: W0805 22:22:22.505506 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505560 kubelet[2526]: E0805 22:22:22.505520 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.505652 kubelet[2526]: E0805 22:22:22.505641 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.505652 kubelet[2526]: W0805 22:22:22.505650 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505726 kubelet[2526]: E0805 22:22:22.505665 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.505838 kubelet[2526]: E0805 22:22:22.505824 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.505838 kubelet[2526]: W0805 22:22:22.505835 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505894 kubelet[2526]: E0805 22:22:22.505849 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.505985 kubelet[2526]: E0805 22:22:22.505968 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.505985 kubelet[2526]: W0805 22:22:22.505977 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.505985 kubelet[2526]: E0805 22:22:22.505987 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.506137 kubelet[2526]: E0805 22:22:22.506124 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.506137 kubelet[2526]: W0805 22:22:22.506134 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.506184 kubelet[2526]: E0805 22:22:22.506144 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.506284 kubelet[2526]: E0805 22:22:22.506273 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.506315 kubelet[2526]: W0805 22:22:22.506284 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.506315 kubelet[2526]: E0805 22:22:22.506296 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.606647 kubelet[2526]: E0805 22:22:22.606492 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.606647 kubelet[2526]: W0805 22:22:22.606511 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.606647 kubelet[2526]: E0805 22:22:22.606530 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.606647 kubelet[2526]: I0805 22:22:22.606567 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-typha-certs\") pod \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\" (UID: \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\") "
Aug 5 22:22:22.606886 kubelet[2526]: E0805 22:22:22.606872 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.606943 kubelet[2526]: W0805 22:22:22.606933 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.607016 kubelet[2526]: E0805 22:22:22.607005 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.607093 kubelet[2526]: I0805 22:22:22.607083 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82cwl\" (UniqueName: \"kubernetes.io/projected/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-kube-api-access-82cwl\") pod \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\" (UID: \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\") "
Aug 5 22:22:22.607305 kubelet[2526]: E0805 22:22:22.607271 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.607305 kubelet[2526]: W0805 22:22:22.607289 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.607372 kubelet[2526]: E0805 22:22:22.607310 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.607573 kubelet[2526]: E0805 22:22:22.607536 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.607573 kubelet[2526]: W0805 22:22:22.607553 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.607573 kubelet[2526]: E0805 22:22:22.607571 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.607654 kubelet[2526]: I0805 22:22:22.607595 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-tigera-ca-bundle\") pod \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\" (UID: \"0e66ea83-6b0f-4244-a06d-6a9815b0a3cd\") "
Aug 5 22:22:22.607839 kubelet[2526]: E0805 22:22:22.607805 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.607839 kubelet[2526]: W0805 22:22:22.607820 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.607839 kubelet[2526]: E0805 22:22:22.607839 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.608034 kubelet[2526]: E0805 22:22:22.608019 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.608034 kubelet[2526]: W0805 22:22:22.608031 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.608100 kubelet[2526]: E0805 22:22:22.608048 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.608736 kubelet[2526]: E0805 22:22:22.608721 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.608736 kubelet[2526]: W0805 22:22:22.608734 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.608814 kubelet[2526]: E0805 22:22:22.608753 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609197 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.612748 kubelet[2526]: W0805 22:22:22.609213 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609228 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609399 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.612748 kubelet[2526]: W0805 22:22:22.609407 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609417 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609554 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.612748 kubelet[2526]: W0805 22:22:22.609561 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609572 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.612748 kubelet[2526]: E0805 22:22:22.609762 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.613031 kubelet[2526]: W0805 22:22:22.609770 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.613031 kubelet[2526]: E0805 22:22:22.609783 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.613031 kubelet[2526]: E0805 22:22:22.612061 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.613031 kubelet[2526]: W0805 22:22:22.612086 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.613031 kubelet[2526]: E0805 22:22:22.612102 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.613730 kubelet[2526]: E0805 22:22:22.613609 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.613730 kubelet[2526]: W0805 22:22:22.613621 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.613730 kubelet[2526]: E0805 22:22:22.613643 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.614066 kubelet[2526]: E0805 22:22:22.613965 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.614066 kubelet[2526]: W0805 22:22:22.613982 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.614319 kubelet[2526]: E0805 22:22:22.614253 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.614319 kubelet[2526]: W0805 22:22:22.614265 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.614319 kubelet[2526]: E0805 22:22:22.614278 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.614520 kubelet[2526]: E0805 22:22:22.614499 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.614647 kubelet[2526]: E0805 22:22:22.614637 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.614726 kubelet[2526]: W0805 22:22:22.614715 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.614726 kubelet[2526]: E0805 22:22:22.614767 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.615038 kubelet[2526]: E0805 22:22:22.615023 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.615038 kubelet[2526]: W0805 22:22:22.615037 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.615152 kubelet[2526]: E0805 22:22:22.615054 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.615301 kubelet[2526]: E0805 22:22:22.615245 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.615301 kubelet[2526]: W0805 22:22:22.615258 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.615301 kubelet[2526]: E0805 22:22:22.615274 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.615428 kubelet[2526]: E0805 22:22:22.615407 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.615428 kubelet[2526]: W0805 22:22:22.615426 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.616657 kubelet[2526]: E0805 22:22:22.615441 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.616657 kubelet[2526]: I0805 22:22:22.615407 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" (UID: "0e66ea83-6b0f-4244-a06d-6a9815b0a3cd"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 5 22:22:22.616657 kubelet[2526]: E0805 22:22:22.615620 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.616657 kubelet[2526]: W0805 22:22:22.615630 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.616657 kubelet[2526]: E0805 22:22:22.615642 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:22:22.616657 kubelet[2526]: I0805 22:22:22.615714 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" (UID: "0e66ea83-6b0f-4244-a06d-6a9815b0a3cd"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 5 22:22:22.616657 kubelet[2526]: E0805 22:22:22.615807 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:22:22.616657 kubelet[2526]: W0805 22:22:22.615814 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:22:22.617360 kubelet[2526]: E0805 22:22:22.615825 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:22:22.617360 kubelet[2526]: E0805 22:22:22.616004 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:22:22.617360 kubelet[2526]: W0805 22:22:22.616013 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:22:22.617360 kubelet[2526]: E0805 22:22:22.616024 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:22:22.617360 kubelet[2526]: E0805 22:22:22.616783 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:22:22.617360 kubelet[2526]: W0805 22:22:22.616794 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:22:22.617360 kubelet[2526]: E0805 22:22:22.616808 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:22:22.617798 kubelet[2526]: I0805 22:22:22.617611 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-kube-api-access-82cwl" (OuterVolumeSpecName: "kube-api-access-82cwl") pod "0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" (UID: "0e66ea83-6b0f-4244-a06d-6a9815b0a3cd"). InnerVolumeSpecName "kube-api-access-82cwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:22:22.624161 kubelet[2526]: E0805 22:22:22.624145 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:22:22.624299 kubelet[2526]: W0805 22:22:22.624195 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:22:22.624299 kubelet[2526]: E0805 22:22:22.624212 2526 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:22:22.639626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8-rootfs.mount: Deactivated successfully. Aug 5 22:22:22.639720 systemd[1]: var-lib-kubelet-pods-0e66ea83\x2d6b0f\x2d4244\x2da06d\x2d6a9815b0a3cd-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Aug 5 22:22:22.639777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae-rootfs.mount: Deactivated successfully. Aug 5 22:22:22.639825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae-shm.mount: Deactivated successfully. Aug 5 22:22:22.640174 systemd[1]: var-lib-kubelet-pods-0e66ea83\x2d6b0f\x2d4244\x2da06d\x2d6a9815b0a3cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d82cwl.mount: Deactivated successfully. Aug 5 22:22:22.640231 systemd[1]: var-lib-kubelet-pods-0e66ea83\x2d6b0f\x2d4244\x2da06d\x2d6a9815b0a3cd-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Aug 5 22:22:22.712595 kubelet[2526]: I0805 22:22:22.712500 2526 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-typha-certs\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:22.712738 kubelet[2526]: I0805 22:22:22.712704 2526 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-82cwl\" (UniqueName: \"kubernetes.io/projected/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-kube-api-access-82cwl\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:22.712738 kubelet[2526]: I0805 22:22:22.712725 2526 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:22.782776 kubelet[2526]: E0805 22:22:22.782738 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:22.784314 containerd[1442]: time="2024-08-05T22:22:22.784130597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c98fb7cd8-7ns5r,Uid:6a7e6375-28c1-4b5d-9013-d5546ab2cd6a,Namespace:calico-system,Attempt:0,}" Aug 5 22:22:22.813076 containerd[1442]: time="2024-08-05T22:22:22.812971041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:22.813076 containerd[1442]: time="2024-08-05T22:22:22.813015640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:22.813076 containerd[1442]: time="2024-08-05T22:22:22.813029080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:22.813076 containerd[1442]: time="2024-08-05T22:22:22.813038640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:22.852605 systemd[1]: Started cri-containerd-f2bc8eabad09f4db95da1ab26bd8d18e1e39b107e7eb6bcaf5c5b035bf62bb8c.scope - libcontainer container f2bc8eabad09f4db95da1ab26bd8d18e1e39b107e7eb6bcaf5c5b035bf62bb8c. Aug 5 22:22:22.887048 containerd[1442]: time="2024-08-05T22:22:22.886515682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c98fb7cd8-7ns5r,Uid:6a7e6375-28c1-4b5d-9013-d5546ab2cd6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2bc8eabad09f4db95da1ab26bd8d18e1e39b107e7eb6bcaf5c5b035bf62bb8c\"" Aug 5 22:22:22.889815 kubelet[2526]: E0805 22:22:22.888645 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:22.898130 containerd[1442]: time="2024-08-05T22:22:22.898040924Z" level=info msg="CreateContainer within sandbox \"f2bc8eabad09f4db95da1ab26bd8d18e1e39b107e7eb6bcaf5c5b035bf62bb8c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:22:22.912170 containerd[1442]: time="2024-08-05T22:22:22.912073794Z" level=info msg="CreateContainer within sandbox \"f2bc8eabad09f4db95da1ab26bd8d18e1e39b107e7eb6bcaf5c5b035bf62bb8c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d4a133ce629e68aaee2c1887fe1de90099e9cca810907d5f1c8a68021745b2c5\"" Aug 5 22:22:22.912713 containerd[1442]: time="2024-08-05T22:22:22.912659102Z" level=info msg="StartContainer for \"d4a133ce629e68aaee2c1887fe1de90099e9cca810907d5f1c8a68021745b2c5\"" Aug 5 22:22:22.943972 systemd[1]: Started cri-containerd-d4a133ce629e68aaee2c1887fe1de90099e9cca810907d5f1c8a68021745b2c5.scope - libcontainer container 
d4a133ce629e68aaee2c1887fe1de90099e9cca810907d5f1c8a68021745b2c5. Aug 5 22:22:22.958327 containerd[1442]: time="2024-08-05T22:22:22.958284359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:22.960772 containerd[1442]: time="2024-08-05T22:22:22.960737228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 22:22:22.961700 containerd[1442]: time="2024-08-05T22:22:22.961386655Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:22.967624 containerd[1442]: time="2024-08-05T22:22:22.967536688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:22.969422 containerd[1442]: time="2024-08-05T22:22:22.969206893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.315024421s" Aug 5 22:22:22.969422 containerd[1442]: time="2024-08-05T22:22:22.969252172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 22:22:22.972846 containerd[1442]: time="2024-08-05T22:22:22.972817059Z" level=info msg="CreateContainer within sandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:22:22.982113 containerd[1442]: time="2024-08-05T22:22:22.982072267Z" level=info msg="StartContainer for \"d4a133ce629e68aaee2c1887fe1de90099e9cca810907d5f1c8a68021745b2c5\" returns successfully" Aug 5 22:22:22.985154 containerd[1442]: time="2024-08-05T22:22:22.985041006Z" level=info msg="CreateContainer within sandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1\"" Aug 5 22:22:22.985534 containerd[1442]: time="2024-08-05T22:22:22.985492517Z" level=info msg="StartContainer for \"bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1\"" Aug 5 22:22:23.023843 systemd[1]: Started cri-containerd-bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1.scope - libcontainer container bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1. Aug 5 22:22:23.072006 containerd[1442]: time="2024-08-05T22:22:23.071955291Z" level=info msg="StartContainer for \"bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1\" returns successfully" Aug 5 22:22:23.104427 systemd[1]: cri-containerd-bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1.scope: Deactivated successfully. 
Aug 5 22:22:23.131466 containerd[1442]: time="2024-08-05T22:22:23.131413897Z" level=info msg="shim disconnected" id=bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1 namespace=k8s.io Aug 5 22:22:23.131466 containerd[1442]: time="2024-08-05T22:22:23.131462216Z" level=warning msg="cleaning up after shim disconnected" id=bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1 namespace=k8s.io Aug 5 22:22:23.131466 containerd[1442]: time="2024-08-05T22:22:23.131470496Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:22:23.225629 containerd[1442]: time="2024-08-05T22:22:23.225325010Z" level=info msg="StopPodSandbox for \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\"" Aug 5 22:22:23.225629 containerd[1442]: time="2024-08-05T22:22:23.225368289Z" level=info msg="Container to stop \"bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:22:23.229543 kubelet[2526]: I0805 22:22:23.226760 2526 scope.go:117] "RemoveContainer" containerID="2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8" Aug 5 22:22:23.232500 kubelet[2526]: E0805 22:22:23.232477 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:23.233166 systemd[1]: Removed slice kubepods-besteffort-pod0e66ea83_6b0f_4244_a06d_6a9815b0a3cd.slice - libcontainer container kubepods-besteffort-pod0e66ea83_6b0f_4244_a06d_6a9815b0a3cd.slice. Aug 5 22:22:23.236361 systemd[1]: cri-containerd-d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64.scope: Deactivated successfully. 
Aug 5 22:22:23.236689 containerd[1442]: time="2024-08-05T22:22:23.236649623Z" level=info msg="RemoveContainer for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\"" Aug 5 22:22:23.240704 containerd[1442]: time="2024-08-05T22:22:23.240656022Z" level=info msg="RemoveContainer for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" returns successfully" Aug 5 22:22:23.241227 kubelet[2526]: I0805 22:22:23.241191 2526 scope.go:117] "RemoveContainer" containerID="2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8" Aug 5 22:22:23.241510 containerd[1442]: time="2024-08-05T22:22:23.241412767Z" level=error msg="ContainerStatus for \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\": not found" Aug 5 22:22:23.241723 kubelet[2526]: E0805 22:22:23.241703 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\": not found" containerID="2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8" Aug 5 22:22:23.241784 kubelet[2526]: I0805 22:22:23.241752 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8"} err="failed to get container status \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"2cc24eb8876802ccda02f48fe56976447b3381ab75ac299cc8b2b3ebb13029a8\": not found" Aug 5 22:22:23.254997 kubelet[2526]: I0805 22:22:23.254970 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6c98fb7cd8-7ns5r" 
podStartSLOduration=4.254930455 podStartE2EDuration="4.254930455s" podCreationTimestamp="2024-08-05 22:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:22:23.253762159 +0000 UTC m=+29.196392930" watchObservedRunningTime="2024-08-05 22:22:23.254930455 +0000 UTC m=+29.197561226" Aug 5 22:22:23.273733 containerd[1442]: time="2024-08-05T22:22:23.273635880Z" level=info msg="shim disconnected" id=d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64 namespace=k8s.io Aug 5 22:22:23.273964 containerd[1442]: time="2024-08-05T22:22:23.273946394Z" level=warning msg="cleaning up after shim disconnected" id=d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64 namespace=k8s.io Aug 5 22:22:23.274044 containerd[1442]: time="2024-08-05T22:22:23.274032152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:22:23.286740 containerd[1442]: time="2024-08-05T22:22:23.286705337Z" level=info msg="TearDown network for sandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" successfully" Aug 5 22:22:23.286852 containerd[1442]: time="2024-08-05T22:22:23.286837015Z" level=info msg="StopPodSandbox for \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" returns successfully" Aug 5 22:22:23.316586 kubelet[2526]: I0805 22:22:23.316553 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e49443a-7575-4b86-95e7-446d05915bf6-node-certs\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316586 kubelet[2526]: I0805 22:22:23.316593 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e49443a-7575-4b86-95e7-446d05915bf6-tigera-ca-bundle\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: 
\"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316840 kubelet[2526]: I0805 22:22:23.316614 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-net-dir\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316840 kubelet[2526]: I0805 22:22:23.316634 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-run-calico\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316840 kubelet[2526]: I0805 22:22:23.316653 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-policysync\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316840 kubelet[2526]: I0805 22:22:23.316671 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-flexvol-driver-host\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316840 kubelet[2526]: I0805 22:22:23.316706 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqbmd\" (UniqueName: \"kubernetes.io/projected/6e49443a-7575-4b86-95e7-446d05915bf6-kube-api-access-bqbmd\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316840 kubelet[2526]: I0805 22:22:23.316724 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-bin-dir\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316986 kubelet[2526]: I0805 22:22:23.316743 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-xtables-lock\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316986 kubelet[2526]: I0805 22:22:23.316761 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-lib-calico\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316986 kubelet[2526]: I0805 22:22:23.316780 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-lib-modules\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316986 kubelet[2526]: I0805 22:22:23.316798 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-log-dir\") pod \"6e49443a-7575-4b86-95e7-446d05915bf6\" (UID: \"6e49443a-7575-4b86-95e7-446d05915bf6\") " Aug 5 22:22:23.316986 kubelet[2526]: I0805 22:22:23.316893 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317116 kubelet[2526]: I0805 22:22:23.317047 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317116 kubelet[2526]: I0805 22:22:23.317081 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317116 kubelet[2526]: I0805 22:22:23.317098 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317116 kubelet[2526]: I0805 22:22:23.317113 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317453 kubelet[2526]: I0805 22:22:23.317258 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e49443a-7575-4b86-95e7-446d05915bf6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:22:23.317453 kubelet[2526]: I0805 22:22:23.317346 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317453 kubelet[2526]: I0805 22:22:23.317366 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317453 kubelet[2526]: I0805 22:22:23.317384 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-policysync" (OuterVolumeSpecName: "policysync") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.317453 kubelet[2526]: I0805 22:22:23.317407 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:22:23.318911 kubelet[2526]: I0805 22:22:23.318879 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e49443a-7575-4b86-95e7-446d05915bf6-node-certs" (OuterVolumeSpecName: "node-certs") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:22:23.319602 kubelet[2526]: I0805 22:22:23.319567 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e49443a-7575-4b86-95e7-446d05915bf6-kube-api-access-bqbmd" (OuterVolumeSpecName: "kube-api-access-bqbmd") pod "6e49443a-7575-4b86-95e7-446d05915bf6" (UID: "6e49443a-7575-4b86-95e7-446d05915bf6"). InnerVolumeSpecName "kube-api-access-bqbmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:22:23.417223 kubelet[2526]: I0805 22:22:23.417173 2526 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-policysync\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417223 kubelet[2526]: I0805 22:22:23.417210 2526 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417223 kubelet[2526]: I0805 22:22:23.417222 2526 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqbmd\" (UniqueName: \"kubernetes.io/projected/6e49443a-7575-4b86-95e7-446d05915bf6-kube-api-access-bqbmd\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417223 kubelet[2526]: I0805 22:22:23.417235 2526 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417244 2526 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417256 2526 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417265 2526 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 
22:22:23.417273 2526 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417283 2526 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e49443a-7575-4b86-95e7-446d05915bf6-node-certs\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417292 2526 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e49443a-7575-4b86-95e7-446d05915bf6-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417301 2526 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.417430 kubelet[2526]: I0805 22:22:23.417309 2526 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e49443a-7575-4b86-95e7-446d05915bf6-var-run-calico\") on node \"localhost\" DevicePath \"\"" Aug 5 22:22:23.638615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64-rootfs.mount: Deactivated successfully. Aug 5 22:22:23.638724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64-shm.mount: Deactivated successfully. Aug 5 22:22:23.638782 systemd[1]: var-lib-kubelet-pods-6e49443a\x2d7575\x2d4b86\x2d95e7\x2d446d05915bf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbqbmd.mount: Deactivated successfully. 
Aug 5 22:22:23.638840 systemd[1]: var-lib-kubelet-pods-6e49443a\x2d7575\x2d4b86\x2d95e7\x2d446d05915bf6-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Aug 5 22:22:24.143240 kubelet[2526]: E0805 22:22:24.142926 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223"
Aug 5 22:22:24.145323 kubelet[2526]: I0805 22:22:24.145299 2526 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0e66ea83-6b0f-4244-a06d-6a9815b0a3cd" path="/var/lib/kubelet/pods/0e66ea83-6b0f-4244-a06d-6a9815b0a3cd/volumes"
Aug 5 22:22:24.152304 systemd[1]: Removed slice kubepods-besteffort-pod6e49443a_7575_4b86_95e7_446d05915bf6.slice - libcontainer container kubepods-besteffort-pod6e49443a_7575_4b86_95e7_446d05915bf6.slice.
Aug 5 22:22:24.238136 kubelet[2526]: I0805 22:22:24.237230 2526 scope.go:117] "RemoveContainer" containerID="bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1"
Aug 5 22:22:24.241189 containerd[1442]: time="2024-08-05T22:22:24.241145418Z" level=info msg="RemoveContainer for \"bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1\""
Aug 5 22:22:24.244785 containerd[1442]: time="2024-08-05T22:22:24.244692309Z" level=info msg="RemoveContainer for \"bf7c481cd6d1ada623e1b22e1b1fff57c9956c9dcb8703cf7a4cfa8976356fc1\" returns successfully"
Aug 5 22:22:24.265482 kubelet[2526]: I0805 22:22:24.265446 2526 topology_manager.go:215] "Topology Admit Handler" podUID="d218f381-86fc-4321-bc18-7a9c9458346b" podNamespace="calico-system" podName="calico-node-tfgdc"
Aug 5 22:22:24.266686 kubelet[2526]: E0805 22:22:24.266651 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e49443a-7575-4b86-95e7-446d05915bf6" containerName="flexvol-driver"
Aug 5 22:22:24.266763 kubelet[2526]: I0805 22:22:24.266706 2526 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e49443a-7575-4b86-95e7-446d05915bf6" containerName="flexvol-driver"
Aug 5 22:22:24.278026 systemd[1]: Created slice kubepods-besteffort-podd218f381_86fc_4321_bc18_7a9c9458346b.slice - libcontainer container kubepods-besteffort-podd218f381_86fc_4321_bc18_7a9c9458346b.slice.
Aug 5 22:22:24.324465 kubelet[2526]: I0805 22:22:24.324418 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d218f381-86fc-4321-bc18-7a9c9458346b-node-certs\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324465 kubelet[2526]: I0805 22:22:24.324465 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-cni-bin-dir\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324661 kubelet[2526]: I0805 22:22:24.324491 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-lib-modules\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324661 kubelet[2526]: I0805 22:22:24.324512 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-var-run-calico\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324661 kubelet[2526]: I0805 22:22:24.324531 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-cni-net-dir\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324661 kubelet[2526]: I0805 22:22:24.324552 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-flexvol-driver-host\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324661 kubelet[2526]: I0805 22:22:24.324572 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8hrq\" (UniqueName: \"kubernetes.io/projected/d218f381-86fc-4321-bc18-7a9c9458346b-kube-api-access-w8hrq\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324831 kubelet[2526]: I0805 22:22:24.324597 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-xtables-lock\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324831 kubelet[2526]: I0805 22:22:24.324616 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-cni-log-dir\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324831 kubelet[2526]: I0805 22:22:24.324635 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d218f381-86fc-4321-bc18-7a9c9458346b-tigera-ca-bundle\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324831 kubelet[2526]: I0805 22:22:24.324659 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-var-lib-calico\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.324831 kubelet[2526]: I0805 22:22:24.324694 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d218f381-86fc-4321-bc18-7a9c9458346b-policysync\") pod \"calico-node-tfgdc\" (UID: \"d218f381-86fc-4321-bc18-7a9c9458346b\") " pod="calico-system/calico-node-tfgdc"
Aug 5 22:22:24.582387 kubelet[2526]: E0805 22:22:24.582045 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:24.583318 containerd[1442]: time="2024-08-05T22:22:24.583272016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfgdc,Uid:d218f381-86fc-4321-bc18-7a9c9458346b,Namespace:calico-system,Attempt:0,}"
Aug 5 22:22:24.605797 containerd[1442]: time="2024-08-05T22:22:24.605688938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:24.605797 containerd[1442]: time="2024-08-05T22:22:24.605751777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:24.605797 containerd[1442]: time="2024-08-05T22:22:24.605766057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:24.606034 containerd[1442]: time="2024-08-05T22:22:24.605779456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:24.623843 systemd[1]: Started cri-containerd-fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab.scope - libcontainer container fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab.
Aug 5 22:22:24.647326 containerd[1442]: time="2024-08-05T22:22:24.647284446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfgdc,Uid:d218f381-86fc-4321-bc18-7a9c9458346b,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\""
Aug 5 22:22:24.648102 kubelet[2526]: E0805 22:22:24.648077 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:24.650054 containerd[1442]: time="2024-08-05T22:22:24.650015272Z" level=info msg="CreateContainer within sandbox \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 5 22:22:24.660877 containerd[1442]: time="2024-08-05T22:22:24.660833461Z" level=info msg="CreateContainer within sandbox \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf\""
Aug 5 22:22:24.661358 containerd[1442]: time="2024-08-05T22:22:24.661323771Z" level=info msg="StartContainer for \"a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf\""
Aug 5 22:22:24.686840 systemd[1]: Started cri-containerd-a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf.scope - libcontainer container a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf.
Aug 5 22:22:24.714275 containerd[1442]: time="2024-08-05T22:22:24.714220378Z" level=info msg="StartContainer for \"a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf\" returns successfully"
Aug 5 22:22:24.726403 systemd[1]: cri-containerd-a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf.scope: Deactivated successfully.
Aug 5 22:22:24.754008 containerd[1442]: time="2024-08-05T22:22:24.753944842Z" level=info msg="shim disconnected" id=a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf namespace=k8s.io
Aug 5 22:22:24.754008 containerd[1442]: time="2024-08-05T22:22:24.753999081Z" level=warning msg="cleaning up after shim disconnected" id=a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf namespace=k8s.io
Aug 5 22:22:24.754008 containerd[1442]: time="2024-08-05T22:22:24.754007961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:22:25.241314 kubelet[2526]: E0805 22:22:25.241284 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:25.242001 containerd[1442]: time="2024-08-05T22:22:25.241931521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 22:22:25.637371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5f2a95a99cad3e105b031f6086b23649fbf47c95894900542591b117b8222bf-rootfs.mount: Deactivated successfully.
Aug 5 22:22:26.141345 kubelet[2526]: E0805 22:22:26.141308 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223"
Aug 5 22:22:26.147154 kubelet[2526]: I0805 22:22:26.147122 2526 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6e49443a-7575-4b86-95e7-446d05915bf6" path="/var/lib/kubelet/pods/6e49443a-7575-4b86-95e7-446d05915bf6/volumes"
Aug 5 22:22:26.817962 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176).
Aug 5 22:22:26.854384 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:22:26.855556 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:26.859476 systemd-logind[1424]: New session 8 of user core.
Aug 5 22:22:26.868800 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 5 22:22:27.014373 sshd[3534]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:27.017963 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:36176.service: Deactivated successfully.
Aug 5 22:22:27.019558 systemd[1]: session-8.scope: Deactivated successfully.
Aug 5 22:22:27.021176 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit.
Aug 5 22:22:27.022304 systemd-logind[1424]: Removed session 8.
Aug 5 22:22:28.142226 kubelet[2526]: E0805 22:22:28.142120 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223"
Aug 5 22:22:28.975038 containerd[1442]: time="2024-08-05T22:22:28.974383256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:28.975038 containerd[1442]: time="2024-08-05T22:22:28.974832488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715"
Aug 5 22:22:28.975731 containerd[1442]: time="2024-08-05T22:22:28.975666274Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:28.978010 containerd[1442]: time="2024-08-05T22:22:28.977971233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:28.978934 containerd[1442]: time="2024-08-05T22:22:28.978888057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.736923576s"
Aug 5 22:22:28.978934 containerd[1442]: time="2024-08-05T22:22:28.978922377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\""
Aug 5 22:22:28.980489 containerd[1442]: time="2024-08-05T22:22:28.980459110Z" level=info msg="CreateContainer within sandbox \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 5 22:22:28.995321 containerd[1442]: time="2024-08-05T22:22:28.995262531Z" level=info msg="CreateContainer within sandbox \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475\""
Aug 5 22:22:28.995923 containerd[1442]: time="2024-08-05T22:22:28.995762082Z" level=info msg="StartContainer for \"d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475\""
Aug 5 22:22:29.023883 systemd[1]: Started cri-containerd-d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475.scope - libcontainer container d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475.
Aug 5 22:22:29.044209 containerd[1442]: time="2024-08-05T22:22:29.044155897Z" level=info msg="StartContainer for \"d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475\" returns successfully"
Aug 5 22:22:29.249543 kubelet[2526]: E0805 22:22:29.249434 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:29.510994 systemd[1]: cri-containerd-d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475.scope: Deactivated successfully.
Aug 5 22:22:29.572192 containerd[1442]: time="2024-08-05T22:22:29.572067442Z" level=info msg="shim disconnected" id=d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475 namespace=k8s.io
Aug 5 22:22:29.572192 containerd[1442]: time="2024-08-05T22:22:29.572193240Z" level=warning msg="cleaning up after shim disconnected" id=d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475 namespace=k8s.io
Aug 5 22:22:29.572389 containerd[1442]: time="2024-08-05T22:22:29.572205600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:22:29.578391 kubelet[2526]: I0805 22:22:29.578363 2526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Aug 5 22:22:29.596183 kubelet[2526]: I0805 22:22:29.595782 2526 topology_manager.go:215] "Topology Admit Handler" podUID="34e6f00e-cfcb-4806-b4ed-da8a03b61a5e" podNamespace="kube-system" podName="coredns-76f75df574-rscnh"
Aug 5 22:22:29.598974 kubelet[2526]: I0805 22:22:29.598947 2526 topology_manager.go:215] "Topology Admit Handler" podUID="fd02d030-3ece-464a-a009-26d6a3348ddd" podNamespace="kube-system" podName="coredns-76f75df574-cnk28"
Aug 5 22:22:29.602359 kubelet[2526]: I0805 22:22:29.601873 2526 topology_manager.go:215] "Topology Admit Handler" podUID="894e3f76-424c-45d5-b5c3-82011c14a86f" podNamespace="calico-system" podName="calico-kube-controllers-59466c9fdc-nnr4r"
Aug 5 22:22:29.606432 systemd[1]: Created slice kubepods-burstable-pod34e6f00e_cfcb_4806_b4ed_da8a03b61a5e.slice - libcontainer container kubepods-burstable-pod34e6f00e_cfcb_4806_b4ed_da8a03b61a5e.slice.
Aug 5 22:22:29.614495 systemd[1]: Created slice kubepods-burstable-podfd02d030_3ece_464a_a009_26d6a3348ddd.slice - libcontainer container kubepods-burstable-podfd02d030_3ece_464a_a009_26d6a3348ddd.slice.
Aug 5 22:22:29.620513 systemd[1]: Created slice kubepods-besteffort-pod894e3f76_424c_45d5_b5c3_82011c14a86f.slice - libcontainer container kubepods-besteffort-pod894e3f76_424c_45d5_b5c3_82011c14a86f.slice.
Aug 5 22:22:29.659015 kubelet[2526]: I0805 22:22:29.658981 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd02d030-3ece-464a-a009-26d6a3348ddd-config-volume\") pod \"coredns-76f75df574-cnk28\" (UID: \"fd02d030-3ece-464a-a009-26d6a3348ddd\") " pod="kube-system/coredns-76f75df574-cnk28"
Aug 5 22:22:29.659263 kubelet[2526]: I0805 22:22:29.659236 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64dq\" (UniqueName: \"kubernetes.io/projected/894e3f76-424c-45d5-b5c3-82011c14a86f-kube-api-access-b64dq\") pod \"calico-kube-controllers-59466c9fdc-nnr4r\" (UID: \"894e3f76-424c-45d5-b5c3-82011c14a86f\") " pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r"
Aug 5 22:22:29.659451 kubelet[2526]: I0805 22:22:29.659359 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894e3f76-424c-45d5-b5c3-82011c14a86f-tigera-ca-bundle\") pod \"calico-kube-controllers-59466c9fdc-nnr4r\" (UID: \"894e3f76-424c-45d5-b5c3-82011c14a86f\") " pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r"
Aug 5 22:22:29.659451 kubelet[2526]: I0805 22:22:29.659391 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e6f00e-cfcb-4806-b4ed-da8a03b61a5e-config-volume\") pod \"coredns-76f75df574-rscnh\" (UID: \"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e\") " pod="kube-system/coredns-76f75df574-rscnh"
Aug 5 22:22:29.659451 kubelet[2526]: I0805 22:22:29.659416 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6tn\" (UniqueName: \"kubernetes.io/projected/fd02d030-3ece-464a-a009-26d6a3348ddd-kube-api-access-6z6tn\") pod \"coredns-76f75df574-cnk28\" (UID: \"fd02d030-3ece-464a-a009-26d6a3348ddd\") " pod="kube-system/coredns-76f75df574-cnk28"
Aug 5 22:22:29.659451 kubelet[2526]: I0805 22:22:29.659435 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlsrp\" (UniqueName: \"kubernetes.io/projected/34e6f00e-cfcb-4806-b4ed-da8a03b61a5e-kube-api-access-rlsrp\") pod \"coredns-76f75df574-rscnh\" (UID: \"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e\") " pod="kube-system/coredns-76f75df574-rscnh"
Aug 5 22:22:29.910712 kubelet[2526]: E0805 22:22:29.910433 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:29.911698 containerd[1442]: time="2024-08-05T22:22:29.911417272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rscnh,Uid:34e6f00e-cfcb-4806-b4ed-da8a03b61a5e,Namespace:kube-system,Attempt:0,}"
Aug 5 22:22:29.918501 kubelet[2526]: E0805 22:22:29.918283 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:29.918777 containerd[1442]: time="2024-08-05T22:22:29.918744588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cnk28,Uid:fd02d030-3ece-464a-a009-26d6a3348ddd,Namespace:kube-system,Attempt:0,}"
Aug 5 22:22:29.923275 containerd[1442]: time="2024-08-05T22:22:29.923233031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466c9fdc-nnr4r,Uid:894e3f76-424c-45d5-b5c3-82011c14a86f,Namespace:calico-system,Attempt:0,}"
Aug 5 22:22:29.998459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2c5061dde3b596895217a26ba1c63b1bf29eb3f14ef61ccae5800198c0ab475-rootfs.mount: Deactivated successfully.
Aug 5 22:22:30.148488 systemd[1]: Created slice kubepods-besteffort-podf86a8cc4_dcb1_4c07_ba89_5368804c0223.slice - libcontainer container kubepods-besteffort-podf86a8cc4_dcb1_4c07_ba89_5368804c0223.slice.
Aug 5 22:22:30.152457 containerd[1442]: time="2024-08-05T22:22:30.150951229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gbsx,Uid:f86a8cc4-dcb1-4c07-ba89-5368804c0223,Namespace:calico-system,Attempt:0,}"
Aug 5 22:22:30.205813 containerd[1442]: time="2024-08-05T22:22:30.205094013Z" level=error msg="Failed to destroy network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.207017 containerd[1442]: time="2024-08-05T22:22:30.206967262Z" level=error msg="encountered an error cleaning up failed sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.207073 containerd[1442]: time="2024-08-05T22:22:30.207034221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rscnh,Uid:34e6f00e-cfcb-4806-b4ed-da8a03b61a5e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.207875 kubelet[2526]: E0805 22:22:30.207834 2526 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.207947 kubelet[2526]: E0805 22:22:30.207897 2526 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rscnh"
Aug 5 22:22:30.207947 kubelet[2526]: E0805 22:22:30.207926 2526 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rscnh"
Aug 5 22:22:30.208004 kubelet[2526]: E0805 22:22:30.207980 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rscnh_kube-system(34e6f00e-cfcb-4806-b4ed-da8a03b61a5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rscnh_kube-system(34e6f00e-cfcb-4806-b4ed-da8a03b61a5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rscnh" podUID="34e6f00e-cfcb-4806-b4ed-da8a03b61a5e"
Aug 5 22:22:30.209512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a-shm.mount: Deactivated successfully.
Aug 5 22:22:30.212538 containerd[1442]: time="2024-08-05T22:22:30.212504931Z" level=error msg="Failed to destroy network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.213920 containerd[1442]: time="2024-08-05T22:22:30.212922844Z" level=error msg="encountered an error cleaning up failed sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.213920 containerd[1442]: time="2024-08-05T22:22:30.212971003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cnk28,Uid:fd02d030-3ece-464a-a009-26d6a3348ddd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.214020 kubelet[2526]: E0805 22:22:30.213138 2526 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.214020 kubelet[2526]: E0805 22:22:30.213177 2526 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cnk28"
Aug 5 22:22:30.214020 kubelet[2526]: E0805 22:22:30.213197 2526 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cnk28"
Aug 5 22:22:30.214100 kubelet[2526]: E0805 22:22:30.213238 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cnk28_kube-system(fd02d030-3ece-464a-a009-26d6a3348ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cnk28_kube-system(fd02d030-3ece-464a-a009-26d6a3348ddd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cnk28" podUID="fd02d030-3ece-464a-a009-26d6a3348ddd"
Aug 5 22:22:30.214290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e-shm.mount: Deactivated successfully.
Aug 5 22:22:30.216750 containerd[1442]: time="2024-08-05T22:22:30.216711581Z" level=error msg="Failed to destroy network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.217284 containerd[1442]: time="2024-08-05T22:22:30.217209413Z" level=error msg="encountered an error cleaning up failed sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.217284 containerd[1442]: time="2024-08-05T22:22:30.217262052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466c9fdc-nnr4r,Uid:894e3f76-424c-45d5-b5c3-82011c14a86f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.217443 kubelet[2526]: E0805 22:22:30.217414 2526 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:22:30.217491 kubelet[2526]: E0805 22:22:30.217467 2526 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r"
Aug 5 22:22:30.217491 kubelet[2526]: E0805 22:22:30.217485 2526 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r"
Aug 5 22:22:30.217558 kubelet[2526]: E0805 22:22:30.217533 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59466c9fdc-nnr4r_calico-system(894e3f76-424c-45d5-b5c3-82011c14a86f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59466c9fdc-nnr4r_calico-system(894e3f76-424c-45d5-b5c3-82011c14a86f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r" podUID="894e3f76-424c-45d5-b5c3-82011c14a86f"
Aug 5 22:22:30.236379 containerd[1442]: time="2024-08-05T22:22:30.236313017Z" level=error msg="Failed to destroy network for sandbox 
\"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.236660 containerd[1442]: time="2024-08-05T22:22:30.236632852Z" level=error msg="encountered an error cleaning up failed sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.236729 containerd[1442]: time="2024-08-05T22:22:30.236706170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gbsx,Uid:f86a8cc4-dcb1-4c07-ba89-5368804c0223,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.236970 kubelet[2526]: E0805 22:22:30.236946 2526 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.237020 kubelet[2526]: E0805 22:22:30.236999 2526 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gbsx" Aug 5 22:22:30.237047 kubelet[2526]: E0805 22:22:30.237018 2526 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gbsx" Aug 5 22:22:30.237090 kubelet[2526]: E0805 22:22:30.237078 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4gbsx_calico-system(f86a8cc4-dcb1-4c07-ba89-5368804c0223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gbsx_calico-system(f86a8cc4-dcb1-4c07-ba89-5368804c0223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223" Aug 5 22:22:30.252618 kubelet[2526]: E0805 22:22:30.252454 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:30.255096 kubelet[2526]: I0805 22:22:30.254199 2526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:30.255157 containerd[1442]: time="2024-08-05T22:22:30.254424757Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:22:30.255440 containerd[1442]: time="2024-08-05T22:22:30.255405981Z" level=info msg="StopPodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\"" Aug 5 22:22:30.255697 containerd[1442]: time="2024-08-05T22:22:30.255605338Z" level=info msg="Ensure that sandbox c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8 in task-service has been cleanup successfully" Aug 5 22:22:30.257690 kubelet[2526]: I0805 22:22:30.257587 2526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:30.258246 containerd[1442]: time="2024-08-05T22:22:30.258211615Z" level=info msg="StopPodSandbox for \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\"" Aug 5 22:22:30.258517 containerd[1442]: time="2024-08-05T22:22:30.258482610Z" level=info msg="Ensure that sandbox 2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e in task-service has been cleanup successfully" Aug 5 22:22:30.259664 kubelet[2526]: I0805 22:22:30.259647 2526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:30.261267 kubelet[2526]: I0805 22:22:30.260759 2526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:30.261340 containerd[1442]: time="2024-08-05T22:22:30.260888610Z" level=info msg="StopPodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\"" Aug 5 22:22:30.261340 containerd[1442]: time="2024-08-05T22:22:30.261061487Z" level=info msg="Ensure that sandbox e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a in task-service has been cleanup successfully" Aug 5 22:22:30.262515 containerd[1442]: 
time="2024-08-05T22:22:30.261611678Z" level=info msg="StopPodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\"" Aug 5 22:22:30.262515 containerd[1442]: time="2024-08-05T22:22:30.262402465Z" level=info msg="Ensure that sandbox f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2 in task-service has been cleanup successfully" Aug 5 22:22:30.296123 containerd[1442]: time="2024-08-05T22:22:30.296070348Z" level=error msg="StopPodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" failed" error="failed to destroy network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.296627 kubelet[2526]: E0805 22:22:30.296584 2526 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:30.296723 kubelet[2526]: E0805 22:22:30.296653 2526 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2"} Aug 5 22:22:30.296723 kubelet[2526]: E0805 22:22:30.296705 2526 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"894e3f76-424c-45d5-b5c3-82011c14a86f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:22:30.296803 kubelet[2526]: E0805 22:22:30.296735 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"894e3f76-424c-45d5-b5c3-82011c14a86f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r" podUID="894e3f76-424c-45d5-b5c3-82011c14a86f" Aug 5 22:22:30.298378 containerd[1442]: time="2024-08-05T22:22:30.298109315Z" level=error msg="StopPodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" failed" error="failed to destroy network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.298446 kubelet[2526]: E0805 22:22:30.298272 2526 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:30.298446 kubelet[2526]: E0805 22:22:30.298299 2526 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a"} Aug 5 22:22:30.298446 kubelet[2526]: E0805 22:22:30.298328 2526 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:22:30.298446 kubelet[2526]: E0805 22:22:30.298351 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rscnh" podUID="34e6f00e-cfcb-4806-b4ed-da8a03b61a5e" Aug 5 22:22:30.300217 containerd[1442]: time="2024-08-05T22:22:30.300176120Z" level=error msg="StopPodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" failed" error="failed to destroy network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.300374 kubelet[2526]: E0805 22:22:30.300353 2526 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:30.300440 kubelet[2526]: E0805 22:22:30.300387 2526 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8"} Aug 5 22:22:30.300440 kubelet[2526]: E0805 22:22:30.300420 2526 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:22:30.300506 kubelet[2526]: E0805 22:22:30.300450 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f86a8cc4-dcb1-4c07-ba89-5368804c0223\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gbsx" podUID="f86a8cc4-dcb1-4c07-ba89-5368804c0223" Aug 5 22:22:30.303906 containerd[1442]: time="2024-08-05T22:22:30.303866859Z" level=error msg="StopPodSandbox for \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" failed" error="failed to destroy network for 
sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:22:30.304074 kubelet[2526]: E0805 22:22:30.304054 2526 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:30.304118 kubelet[2526]: E0805 22:22:30.304089 2526 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e"} Aug 5 22:22:30.304148 kubelet[2526]: E0805 22:22:30.304122 2526 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd02d030-3ece-464a-a009-26d6a3348ddd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:22:30.304193 kubelet[2526]: E0805 22:22:30.304153 2526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd02d030-3ece-464a-a009-26d6a3348ddd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cnk28" podUID="fd02d030-3ece-464a-a009-26d6a3348ddd" Aug 5 22:22:30.992583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8-shm.mount: Deactivated successfully. Aug 5 22:22:30.992699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2-shm.mount: Deactivated successfully. Aug 5 22:22:32.026906 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:49182.service - OpenSSH per-connection server daemon (10.0.0.1:49182). Aug 5 22:22:32.070869 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 49182 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:32.072284 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:32.076985 systemd-logind[1424]: New session 9 of user core. Aug 5 22:22:32.092923 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:22:32.217739 sshd[3868]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:32.221288 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:49182.service: Deactivated successfully. Aug 5 22:22:32.224633 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:22:32.226558 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:22:32.227722 systemd-logind[1424]: Removed session 9. Aug 5 22:22:33.782705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236149807.mount: Deactivated successfully. 
Aug 5 22:22:33.956148 containerd[1442]: time="2024-08-05T22:22:33.955780407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:33.956148 containerd[1442]: time="2024-08-05T22:22:33.956084482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 22:22:33.957043 containerd[1442]: time="2024-08-05T22:22:33.956984788Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:33.959920 containerd[1442]: time="2024-08-05T22:22:33.958660243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:33.959920 containerd[1442]: time="2024-08-05T22:22:33.959797905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.705343228s" Aug 5 22:22:33.959920 containerd[1442]: time="2024-08-05T22:22:33.959826545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 22:22:33.968974 containerd[1442]: time="2024-08-05T22:22:33.968932286Z" level=info msg="CreateContainer within sandbox \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:22:33.984805 containerd[1442]: time="2024-08-05T22:22:33.984760245Z" level=info msg="CreateContainer 
within sandbox \"fe96624d730b15ffe82a30bdf32eb331ca9e7bda0d4cd266b41a3d73b6e545ab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6090ab854481eee7705a7e04807421b752e1f419756133d1d6d947b330549ba7\"" Aug 5 22:22:33.985554 containerd[1442]: time="2024-08-05T22:22:33.985528673Z" level=info msg="StartContainer for \"6090ab854481eee7705a7e04807421b752e1f419756133d1d6d947b330549ba7\"" Aug 5 22:22:34.033927 systemd[1]: Started cri-containerd-6090ab854481eee7705a7e04807421b752e1f419756133d1d6d947b330549ba7.scope - libcontainer container 6090ab854481eee7705a7e04807421b752e1f419756133d1d6d947b330549ba7. Aug 5 22:22:34.105112 containerd[1442]: time="2024-08-05T22:22:34.105069172Z" level=info msg="StartContainer for \"6090ab854481eee7705a7e04807421b752e1f419756133d1d6d947b330549ba7\" returns successfully" Aug 5 22:22:34.204977 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:22:34.205108 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 5 22:22:34.279546 kubelet[2526]: E0805 22:22:34.279428 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:35.275062 kubelet[2526]: E0805 22:22:35.274231 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:37.232441 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:49196.service - OpenSSH per-connection server daemon (10.0.0.1:49196). Aug 5 22:22:37.289996 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 49196 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:37.291445 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:37.295759 systemd-logind[1424]: New session 10 of user core. 
Aug 5 22:22:37.302863 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:22:37.437242 sshd[4127]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:37.445358 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:49196.service: Deactivated successfully. Aug 5 22:22:37.448141 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:22:37.452462 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:22:37.469045 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:49198.service - OpenSSH per-connection server daemon (10.0.0.1:49198). Aug 5 22:22:37.476013 systemd-logind[1424]: Removed session 10. Aug 5 22:22:37.511789 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 49198 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:37.513233 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:37.520330 systemd-logind[1424]: New session 11 of user core. Aug 5 22:22:37.538880 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:22:37.695932 sshd[4142]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:37.706067 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:49198.service: Deactivated successfully. Aug 5 22:22:37.711146 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:22:37.714377 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:22:37.725047 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:49208.service - OpenSSH per-connection server daemon (10.0.0.1:49208). Aug 5 22:22:37.726319 systemd-logind[1424]: Removed session 11. Aug 5 22:22:37.759462 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 49208 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:37.760749 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:37.766167 systemd-logind[1424]: New session 12 of user core. 
Aug 5 22:22:37.771918 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:22:37.894493 sshd[4157]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:37.897999 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:49208.service: Deactivated successfully. Aug 5 22:22:37.899831 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:22:37.900387 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:22:37.902158 systemd-logind[1424]: Removed session 12. Aug 5 22:22:40.461945 kubelet[2526]: I0805 22:22:40.461835 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:22:40.462470 kubelet[2526]: E0805 22:22:40.462425 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:40.471967 kubelet[2526]: I0805 22:22:40.471930 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tfgdc" podStartSLOduration=7.753413801 podStartE2EDuration="16.471880016s" podCreationTimestamp="2024-08-05 22:22:24 +0000 UTC" firstStartedPulling="2024-08-05 22:22:25.241699245 +0000 UTC m=+31.184330016" lastFinishedPulling="2024-08-05 22:22:33.96016546 +0000 UTC m=+39.902796231" observedRunningTime="2024-08-05 22:22:34.294501241 +0000 UTC m=+40.237132012" watchObservedRunningTime="2024-08-05 22:22:40.471880016 +0000 UTC m=+46.414510787" Aug 5 22:22:40.987112 systemd-networkd[1373]: vxlan.calico: Link UP Aug 5 22:22:40.987122 systemd-networkd[1373]: vxlan.calico: Gained carrier Aug 5 22:22:41.287870 kubelet[2526]: E0805 22:22:41.287726 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:42.142175 containerd[1442]: time="2024-08-05T22:22:42.141985116Z" level=info msg="StopPodSandbox for 
\"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\"" Aug 5 22:22:42.142757 containerd[1442]: time="2024-08-05T22:22:42.142729427Z" level=info msg="StopPodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\"" Aug 5 22:22:42.353858 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.250 [INFO][4414] k8s.go 608: Cleaning up netns ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.250 [INFO][4414] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" iface="eth0" netns="/var/run/netns/cni-cd220cbe-6843-23bf-9896-5d9f308cb5d3" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.250 [INFO][4414] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" iface="eth0" netns="/var/run/netns/cni-cd220cbe-6843-23bf-9896-5d9f308cb5d3" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.250 [INFO][4414] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" iface="eth0" netns="/var/run/netns/cni-cd220cbe-6843-23bf-9896-5d9f308cb5d3" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.250 [INFO][4414] k8s.go 615: Releasing IP address(es) ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.250 [INFO][4414] utils.go 188: Calico CNI releasing IP address ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.347 [INFO][4430] ipam_plugin.go 411: Releasing address using handleID ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.347 [INFO][4430] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.347 [INFO][4430] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.357 [WARNING][4430] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.357 [INFO][4430] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.359 [INFO][4430] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:42.364188 containerd[1442]: 2024-08-05 22:22:42.361 [INFO][4414] k8s.go 621: Teardown processing complete. ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:42.366833 containerd[1442]: time="2024-08-05T22:22:42.364339361Z" level=info msg="TearDown network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" successfully" Aug 5 22:22:42.366833 containerd[1442]: time="2024-08-05T22:22:42.364366241Z" level=info msg="StopPodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" returns successfully" Aug 5 22:22:42.366833 containerd[1442]: time="2024-08-05T22:22:42.366171739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466c9fdc-nnr4r,Uid:894e3f76-424c-45d5-b5c3-82011c14a86f,Namespace:calico-system,Attempt:1,}" Aug 5 22:22:42.367399 systemd[1]: run-netns-cni\x2dcd220cbe\x2d6843\x2d23bf\x2d9896\x2d5d9f308cb5d3.mount: Deactivated successfully. 
Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.244 [INFO][4415] k8s.go 608: Cleaning up netns ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.244 [INFO][4415] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" iface="eth0" netns="/var/run/netns/cni-724776db-2f8f-08c7-5a28-0401044acde3" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.244 [INFO][4415] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" iface="eth0" netns="/var/run/netns/cni-724776db-2f8f-08c7-5a28-0401044acde3" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.245 [INFO][4415] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" iface="eth0" netns="/var/run/netns/cni-724776db-2f8f-08c7-5a28-0401044acde3" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.245 [INFO][4415] k8s.go 615: Releasing IP address(es) ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.245 [INFO][4415] utils.go 188: Calico CNI releasing IP address ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.347 [INFO][4429] ipam_plugin.go 411: Releasing address using handleID ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.347 [INFO][4429] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.359 [INFO][4429] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.369 [WARNING][4429] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.369 [INFO][4429] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.370 [INFO][4429] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:22:42.374196 containerd[1442]: 2024-08-05 22:22:42.372 [INFO][4415] k8s.go 621: Teardown processing complete. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e"
Aug 5 22:22:42.374802 containerd[1442]: time="2024-08-05T22:22:42.374688397Z" level=info msg="TearDown network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" successfully"
Aug 5 22:22:42.374802 containerd[1442]: time="2024-08-05T22:22:42.374717276Z" level=info msg="StopPodSandbox for \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" returns successfully"
Aug 5 22:22:42.375037 kubelet[2526]: E0805 22:22:42.375010 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:42.377102 containerd[1442]: time="2024-08-05T22:22:42.376875930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cnk28,Uid:fd02d030-3ece-464a-a009-26d6a3348ddd,Namespace:kube-system,Attempt:1,}"
Aug 5 22:22:42.377148 systemd[1]: run-netns-cni\x2d724776db\x2d2f8f\x2d08c7\x2d5a28\x2d0401044acde3.mount: Deactivated successfully.
Aug 5 22:22:42.531958 systemd-networkd[1373]: cali6dee42cf023: Link UP
Aug 5 22:22:42.532757 systemd-networkd[1373]: cali6dee42cf023: Gained carrier
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.448 [INFO][4444] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0 calico-kube-controllers-59466c9fdc- calico-system 894e3f76-424c-45d5-b5c3-82011c14a86f 907 0 2024-08-05 22:22:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59466c9fdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59466c9fdc-nnr4r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6dee42cf023 [] []}} ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.448 [INFO][4444] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.481 [INFO][4473] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" HandleID="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.492 [INFO][4473] ipam_plugin.go 264: Auto assigning IP ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" HandleID="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002925e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59466c9fdc-nnr4r", "timestamp":"2024-08-05 22:22:42.481388153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.492 [INFO][4473] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.492 [INFO][4473] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.492 [INFO][4473] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.496 [INFO][4473] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.502 [INFO][4473] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.512 [INFO][4473] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.514 [INFO][4473] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.516 [INFO][4473] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.517 [INFO][4473] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.518 [INFO][4473] ipam.go 1685: Creating new handle: k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.521 [INFO][4473] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.525 [INFO][4473] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.525 [INFO][4473] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" host="localhost"
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.525 [INFO][4473] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:22:42.546945 containerd[1442]: 2024-08-05 22:22:42.525 [INFO][4473] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" HandleID="k8s-pod-network.fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.547574 containerd[1442]: 2024-08-05 22:22:42.528 [INFO][4444] k8s.go 386: Populated endpoint ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0", GenerateName:"calico-kube-controllers-59466c9fdc-", Namespace:"calico-system", SelfLink:"", UID:"894e3f76-424c-45d5-b5c3-82011c14a86f", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59466c9fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59466c9fdc-nnr4r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6dee42cf023", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:42.547574 containerd[1442]: 2024-08-05 22:22:42.529 [INFO][4444] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.547574 containerd[1442]: 2024-08-05 22:22:42.529 [INFO][4444] dataplane_linux.go 68: Setting the host side veth name to cali6dee42cf023 ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.547574 containerd[1442]: 2024-08-05 22:22:42.532 [INFO][4444] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.547574 containerd[1442]: 2024-08-05 22:22:42.533 [INFO][4444] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0", GenerateName:"calico-kube-controllers-59466c9fdc-", Namespace:"calico-system", SelfLink:"", UID:"894e3f76-424c-45d5-b5c3-82011c14a86f", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59466c9fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390", Pod:"calico-kube-controllers-59466c9fdc-nnr4r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6dee42cf023", MAC:"06:70:67:42:36:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:42.547574 containerd[1442]: 2024-08-05 22:22:42.545 [INFO][4444] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390" Namespace="calico-system" Pod="calico-kube-controllers-59466c9fdc-nnr4r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0"
Aug 5 22:22:42.576036 systemd-networkd[1373]: calie2b64fef886: Link UP
Aug 5 22:22:42.576473 systemd-networkd[1373]: calie2b64fef886: Gained carrier
Aug 5 22:22:42.586303 containerd[1442]: time="2024-08-05T22:22:42.581660507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:42.586303 containerd[1442]: time="2024-08-05T22:22:42.581782706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:42.586303 containerd[1442]: time="2024-08-05T22:22:42.581831185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:42.586303 containerd[1442]: time="2024-08-05T22:22:42.581847185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.458 [INFO][4455] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--cnk28-eth0 coredns-76f75df574- kube-system fd02d030-3ece-464a-a009-26d6a3348ddd 906 0 2024-08-05 22:22:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-cnk28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2b64fef886 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.458 [INFO][4455] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.491 [INFO][4478] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" HandleID="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.505 [INFO][4478] ipam_plugin.go 264: Auto assigning IP ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" HandleID="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e2e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-cnk28", "timestamp":"2024-08-05 22:22:42.491427553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.505 [INFO][4478] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.525 [INFO][4478] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.525 [INFO][4478] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.534 [INFO][4478] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.547 [INFO][4478] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.552 [INFO][4478] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.554 [INFO][4478] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.559 [INFO][4478] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.559 [INFO][4478] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.561 [INFO][4478] ipam.go 1685: Creating new handle: k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.565 [INFO][4478] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.571 [INFO][4478] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.571 [INFO][4478] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" host="localhost"
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.571 [INFO][4478] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:22:42.592424 containerd[1442]: 2024-08-05 22:22:42.571 [INFO][4478] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" HandleID="k8s-pod-network.47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.593087 containerd[1442]: 2024-08-05 22:22:42.573 [INFO][4455] k8s.go 386: Populated endpoint ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cnk28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd02d030-3ece-464a-a009-26d6a3348ddd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-cnk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b64fef886", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:42.593087 containerd[1442]: 2024-08-05 22:22:42.574 [INFO][4455] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.593087 containerd[1442]: 2024-08-05 22:22:42.574 [INFO][4455] dataplane_linux.go 68: Setting the host side veth name to calie2b64fef886 ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.593087 containerd[1442]: 2024-08-05 22:22:42.576 [INFO][4455] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.593087 containerd[1442]: 2024-08-05 22:22:42.577 [INFO][4455] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cnk28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd02d030-3ece-464a-a009-26d6a3348ddd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70", Pod:"coredns-76f75df574-cnk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b64fef886", MAC:"1a:b1:03:37:3a:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:42.593087 containerd[1442]: 2024-08-05 22:22:42.589 [INFO][4455] k8s.go 500: Wrote updated endpoint to datastore ContainerID="47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70" Namespace="kube-system" Pod="coredns-76f75df574-cnk28" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cnk28-eth0"
Aug 5 22:22:42.620144 containerd[1442]: time="2024-08-05T22:22:42.619802008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:42.620144 containerd[1442]: time="2024-08-05T22:22:42.619856088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:42.620144 containerd[1442]: time="2024-08-05T22:22:42.619881447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:42.620144 containerd[1442]: time="2024-08-05T22:22:42.619895727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:42.629833 systemd[1]: Started cri-containerd-fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390.scope - libcontainer container fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390.
Aug 5 22:22:42.632798 systemd[1]: Started cri-containerd-47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70.scope - libcontainer container 47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70.
Aug 5 22:22:42.642771 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:22:42.645275 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:22:42.665580 containerd[1442]: time="2024-08-05T22:22:42.665444699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59466c9fdc-nnr4r,Uid:894e3f76-424c-45d5-b5c3-82011c14a86f,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390\""
Aug 5 22:22:42.665989 containerd[1442]: time="2024-08-05T22:22:42.665625137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cnk28,Uid:fd02d030-3ece-464a-a009-26d6a3348ddd,Namespace:kube-system,Attempt:1,} returns sandbox id \"47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70\""
Aug 5 22:22:42.666488 kubelet[2526]: E0805 22:22:42.666459 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:42.668310 containerd[1442]: time="2024-08-05T22:22:42.667244798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Aug 5 22:22:42.669975 containerd[1442]: time="2024-08-05T22:22:42.669940725Z" level=info msg="CreateContainer within sandbox \"47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 5 22:22:42.680805 containerd[1442]: time="2024-08-05T22:22:42.680764755Z" level=info msg="CreateContainer within sandbox \"47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92aab11ee5036e8eaf3e35573b4b17050e189a8e09bc72c84ff766b542ed60fa\""
Aug 5 22:22:42.681573 containerd[1442]: time="2024-08-05T22:22:42.681499586Z" level=info msg="StartContainer for \"92aab11ee5036e8eaf3e35573b4b17050e189a8e09bc72c84ff766b542ed60fa\""
Aug 5 22:22:42.706860 systemd[1]: Started cri-containerd-92aab11ee5036e8eaf3e35573b4b17050e189a8e09bc72c84ff766b542ed60fa.scope - libcontainer container 92aab11ee5036e8eaf3e35573b4b17050e189a8e09bc72c84ff766b542ed60fa.
Aug 5 22:22:42.728951 containerd[1442]: time="2024-08-05T22:22:42.728903256Z" level=info msg="StartContainer for \"92aab11ee5036e8eaf3e35573b4b17050e189a8e09bc72c84ff766b542ed60fa\" returns successfully"
Aug 5 22:22:42.908633 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:59226.service - OpenSSH per-connection server daemon (10.0.0.1:59226).
Aug 5 22:22:42.946749 sshd[4639]: Accepted publickey for core from 10.0.0.1 port 59226 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:22:42.948147 sshd[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:42.952667 systemd-logind[1424]: New session 13 of user core.
Aug 5 22:22:42.957886 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 5 22:22:43.074984 sshd[4639]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:43.077435 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:59226.service: Deactivated successfully.
Aug 5 22:22:43.080148 systemd[1]: session-13.scope: Deactivated successfully.
Aug 5 22:22:43.081784 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit.
Aug 5 22:22:43.082596 systemd-logind[1424]: Removed session 13.
Aug 5 22:22:43.295781 kubelet[2526]: E0805 22:22:43.294892 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:43.307738 kubelet[2526]: I0805 22:22:43.306648 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cnk28" podStartSLOduration=33.30661276 podStartE2EDuration="33.30661276s" podCreationTimestamp="2024-08-05 22:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:22:43.306385923 +0000 UTC m=+49.249016694" watchObservedRunningTime="2024-08-05 22:22:43.30661276 +0000 UTC m=+49.249243531"
Aug 5 22:22:44.183945 containerd[1442]: time="2024-08-05T22:22:44.183886128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:44.185532 containerd[1442]: time="2024-08-05T22:22:44.185488750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057"
Aug 5 22:22:44.186368 containerd[1442]: time="2024-08-05T22:22:44.186328260Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:44.192964 containerd[1442]: time="2024-08-05T22:22:44.192928705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:44.194110 containerd[1442]: time="2024-08-05T22:22:44.194065572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.526786295s"
Aug 5 22:22:44.194110 containerd[1442]: time="2024-08-05T22:22:44.194101292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\""
Aug 5 22:22:44.201610 containerd[1442]: time="2024-08-05T22:22:44.201535767Z" level=info msg="CreateContainer within sandbox \"fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 5 22:22:44.210904 systemd-networkd[1373]: calie2b64fef886: Gained IPv6LL
Aug 5 22:22:44.213553 containerd[1442]: time="2024-08-05T22:22:44.213517270Z" level=info msg="CreateContainer within sandbox \"fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"711a6843aff68865eceeca352aa339a944d7f63c92e245628c7d54d86b1945ab\""
Aug 5 22:22:44.214132 containerd[1442]: time="2024-08-05T22:22:44.214105503Z" level=info msg="StartContainer for \"711a6843aff68865eceeca352aa339a944d7f63c92e245628c7d54d86b1945ab\""
Aug 5 22:22:44.241884 systemd[1]: Started cri-containerd-711a6843aff68865eceeca352aa339a944d7f63c92e245628c7d54d86b1945ab.scope - libcontainer container 711a6843aff68865eceeca352aa339a944d7f63c92e245628c7d54d86b1945ab.
Aug 5 22:22:44.274094 containerd[1442]: time="2024-08-05T22:22:44.273559143Z" level=info msg="StartContainer for \"711a6843aff68865eceeca352aa339a944d7f63c92e245628c7d54d86b1945ab\" returns successfully" Aug 5 22:22:44.273796 systemd-networkd[1373]: cali6dee42cf023: Gained IPv6LL Aug 5 22:22:44.297597 kubelet[2526]: E0805 22:22:44.297566 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:45.142800 containerd[1442]: time="2024-08-05T22:22:45.142740169Z" level=info msg="StopPodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\"" Aug 5 22:22:45.143493 containerd[1442]: time="2024-08-05T22:22:45.143339603Z" level=info msg="StopPodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\"" Aug 5 22:22:45.194722 kubelet[2526]: I0805 22:22:45.194271 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59466c9fdc-nnr4r" podStartSLOduration=24.666695069 podStartE2EDuration="26.194227435s" podCreationTimestamp="2024-08-05 22:22:19 +0000 UTC" firstStartedPulling="2024-08-05 22:22:42.667001681 +0000 UTC m=+48.609632412" lastFinishedPulling="2024-08-05 22:22:44.194534007 +0000 UTC m=+50.137164778" observedRunningTime="2024-08-05 22:22:44.30886886 +0000 UTC m=+50.251499631" watchObservedRunningTime="2024-08-05 22:22:45.194227435 +0000 UTC m=+51.136858206" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.194 [INFO][4753] k8s.go 608: Cleaning up netns ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.194 [INFO][4753] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" iface="eth0" netns="/var/run/netns/cni-dd1737cc-ffd1-5bcf-4a21-68e78f357045" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.195 [INFO][4753] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" iface="eth0" netns="/var/run/netns/cni-dd1737cc-ffd1-5bcf-4a21-68e78f357045" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.195 [INFO][4753] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" iface="eth0" netns="/var/run/netns/cni-dd1737cc-ffd1-5bcf-4a21-68e78f357045" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.195 [INFO][4753] k8s.go 615: Releasing IP address(es) ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.195 [INFO][4753] utils.go 188: Calico CNI releasing IP address ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.227 [INFO][4770] ipam_plugin.go 411: Releasing address using handleID ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.227 [INFO][4770] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.227 [INFO][4770] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.237 [WARNING][4770] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.237 [INFO][4770] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.238 [INFO][4770] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:45.246703 containerd[1442]: 2024-08-05 22:22:45.241 [INFO][4753] k8s.go 621: Teardown processing complete. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:45.247689 containerd[1442]: time="2024-08-05T22:22:45.246844329Z" level=info msg="TearDown network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" successfully" Aug 5 22:22:45.247689 containerd[1442]: time="2024-08-05T22:22:45.246871769Z" level=info msg="StopPodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" returns successfully" Aug 5 22:22:45.247739 kubelet[2526]: E0805 22:22:45.247301 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:45.248992 containerd[1442]: time="2024-08-05T22:22:45.248900426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rscnh,Uid:34e6f00e-cfcb-4806-b4ed-da8a03b61a5e,Namespace:kube-system,Attempt:1,}" Aug 5 22:22:45.248977 systemd[1]: run-netns-cni\x2ddd1737cc\x2dffd1\x2d5bcf\x2d4a21\x2d68e78f357045.mount: Deactivated successfully. 
Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.203 [INFO][4752] k8s.go 608: Cleaning up netns ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.205 [INFO][4752] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" iface="eth0" netns="/var/run/netns/cni-2a44494e-86cc-baa0-b977-90b3bd9f98e8" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.205 [INFO][4752] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" iface="eth0" netns="/var/run/netns/cni-2a44494e-86cc-baa0-b977-90b3bd9f98e8" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.205 [INFO][4752] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" iface="eth0" netns="/var/run/netns/cni-2a44494e-86cc-baa0-b977-90b3bd9f98e8" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.205 [INFO][4752] k8s.go 615: Releasing IP address(es) ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.205 [INFO][4752] utils.go 188: Calico CNI releasing IP address ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.228 [INFO][4775] ipam_plugin.go 411: Releasing address using handleID ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.228 [INFO][4775] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.238 [INFO][4775] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.254 [WARNING][4775] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.254 [INFO][4775] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.256 [INFO][4775] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:45.260231 containerd[1442]: 2024-08-05 22:22:45.258 [INFO][4752] k8s.go 621: Teardown processing complete. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:45.260634 containerd[1442]: time="2024-08-05T22:22:45.260356378Z" level=info msg="TearDown network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" successfully" Aug 5 22:22:45.260634 containerd[1442]: time="2024-08-05T22:22:45.260385178Z" level=info msg="StopPodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" returns successfully" Aug 5 22:22:45.262656 containerd[1442]: time="2024-08-05T22:22:45.260972092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gbsx,Uid:f86a8cc4-dcb1-4c07-ba89-5368804c0223,Namespace:calico-system,Attempt:1,}" Aug 5 22:22:45.263054 systemd[1]: run-netns-cni\x2d2a44494e\x2d86cc\x2dbaa0\x2db977\x2d90b3bd9f98e8.mount: Deactivated successfully. 
Aug 5 22:22:45.299573 kubelet[2526]: E0805 22:22:45.299504 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:45.447332 systemd-networkd[1373]: calibbe5a1f7077: Link UP Aug 5 22:22:45.447544 systemd-networkd[1373]: calibbe5a1f7077: Gained carrier Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.373 [INFO][4803] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--rscnh-eth0 coredns-76f75df574- kube-system 34e6f00e-cfcb-4806-b4ed-da8a03b61a5e 955 0 2024-08-05 22:22:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-rscnh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbe5a1f7077 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.373 [INFO][4803] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.401 [INFO][4837] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" HandleID="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.463566 containerd[1442]: 
2024-08-05 22:22:45.416 [INFO][4837] ipam_plugin.go 264: Auto assigning IP ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" HandleID="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-rscnh", "timestamp":"2024-08-05 22:22:45.401159009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.416 [INFO][4837] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.416 [INFO][4837] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.416 [INFO][4837] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.418 [INFO][4837] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.421 [INFO][4837] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.426 [INFO][4837] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.428 [INFO][4837] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.430 [INFO][4837] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.430 [INFO][4837] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.432 [INFO][4837] ipam.go 1685: Creating new handle: k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702 Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.435 [INFO][4837] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.439 [INFO][4837] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" host="localhost" Aug 5 
22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.439 [INFO][4837] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" host="localhost" Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.439 [INFO][4837] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:45.463566 containerd[1442]: 2024-08-05 22:22:45.439 [INFO][4837] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" HandleID="k8s-pod-network.ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.464264 containerd[1442]: 2024-08-05 22:22:45.442 [INFO][4803] k8s.go 386: Populated endpoint ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rscnh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-rscnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5a1f7077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:45.464264 containerd[1442]: 2024-08-05 22:22:45.443 [INFO][4803] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.464264 containerd[1442]: 2024-08-05 22:22:45.443 [INFO][4803] dataplane_linux.go 68: Setting the host side veth name to calibbe5a1f7077 ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.464264 containerd[1442]: 2024-08-05 22:22:45.447 [INFO][4803] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.464264 containerd[1442]: 2024-08-05 22:22:45.447 [INFO][4803] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rscnh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702", Pod:"coredns-76f75df574-rscnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5a1f7077", MAC:"9a:4a:fc:31:e7:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:45.464264 containerd[1442]: 2024-08-05 22:22:45.455 [INFO][4803] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702" Namespace="kube-system" Pod="coredns-76f75df574-rscnh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:45.485083 systemd-networkd[1373]: calic4e285434b1: Link UP Aug 5 22:22:45.485300 systemd-networkd[1373]: calic4e285434b1: Gained carrier Aug 5 22:22:45.493700 containerd[1442]: time="2024-08-05T22:22:45.493316262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:45.493700 containerd[1442]: time="2024-08-05T22:22:45.493375902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:45.493700 containerd[1442]: time="2024-08-05T22:22:45.493401581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:45.493700 containerd[1442]: time="2024-08-05T22:22:45.493427261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.376 [INFO][4820] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4gbsx-eth0 csi-node-driver- calico-system f86a8cc4-dcb1-4c07-ba89-5368804c0223 956 0 2024-08-05 22:22:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-4gbsx eth0 default [] [] [kns.calico-system ksa.calico-system.default] calic4e285434b1 [] []}} ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.376 [INFO][4820] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.401 [INFO][4842] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" HandleID="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.417 [INFO][4842] ipam_plugin.go 264: Auto assigning IP ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" HandleID="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" 
Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027bdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4gbsx", "timestamp":"2024-08-05 22:22:45.401233368 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.417 [INFO][4842] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.439 [INFO][4842] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.439 [INFO][4842] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.442 [INFO][4842] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.449 [INFO][4842] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.454 [INFO][4842] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.457 [INFO][4842] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.460 [INFO][4842] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.460 [INFO][4842] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.462 [INFO][4842] ipam.go 1685: Creating new handle: k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836 Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.470 [INFO][4842] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.475 [INFO][4842] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.476 [INFO][4842] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" host="localhost" Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.476 [INFO][4842] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:22:45.501654 containerd[1442]: 2024-08-05 22:22:45.476 [INFO][4842] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" HandleID="k8s-pod-network.9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.502319 containerd[1442]: 2024-08-05 22:22:45.480 [INFO][4820] k8s.go 386: Populated endpoint ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4gbsx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f86a8cc4-dcb1-4c07-ba89-5368804c0223", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4gbsx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"calic4e285434b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:45.502319 containerd[1442]: 2024-08-05 22:22:45.481 [INFO][4820] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.502319 containerd[1442]: 2024-08-05 22:22:45.481 [INFO][4820] dataplane_linux.go 68: Setting the host side veth name to calic4e285434b1 ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.502319 containerd[1442]: 2024-08-05 22:22:45.485 [INFO][4820] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.502319 containerd[1442]: 2024-08-05 22:22:45.485 [INFO][4820] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4gbsx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f86a8cc4-dcb1-4c07-ba89-5368804c0223", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836", Pod:"csi-node-driver-4gbsx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic4e285434b1", MAC:"96:62:5e:aa:1e:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:45.502319 containerd[1442]: 2024-08-05 22:22:45.495 [INFO][4820] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836" Namespace="calico-system" Pod="csi-node-driver-4gbsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:45.523874 systemd[1]: Started cri-containerd-ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702.scope - libcontainer container ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702. Aug 5 22:22:45.531609 containerd[1442]: time="2024-08-05T22:22:45.531526076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:45.531609 containerd[1442]: time="2024-08-05T22:22:45.531588556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:45.531788 containerd[1442]: time="2024-08-05T22:22:45.531622475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:45.531788 containerd[1442]: time="2024-08-05T22:22:45.531646875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:45.540822 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:22:45.558113 systemd[1]: Started cri-containerd-9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836.scope - libcontainer container 9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836. Aug 5 22:22:45.566897 containerd[1442]: time="2024-08-05T22:22:45.566837443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rscnh,Uid:34e6f00e-cfcb-4806-b4ed-da8a03b61a5e,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702\"" Aug 5 22:22:45.569035 kubelet[2526]: E0805 22:22:45.568853 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:45.571318 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:22:45.573196 containerd[1442]: time="2024-08-05T22:22:45.572902615Z" level=info msg="CreateContainer within sandbox \"ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:22:45.584177 containerd[1442]: time="2024-08-05T22:22:45.584133250Z" level=info msg="CreateContainer within sandbox \"ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d91d97ab49ed84b318e6e182b727c3da0bc73a3b1e6a719e3c92e8e614ac487d\"" Aug 5 22:22:45.586088 containerd[1442]: time="2024-08-05T22:22:45.584949441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gbsx,Uid:f86a8cc4-dcb1-4c07-ba89-5368804c0223,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836\"" Aug 5 22:22:45.586088 containerd[1442]: time="2024-08-05T22:22:45.585954550Z" level=info msg="StartContainer for \"d91d97ab49ed84b318e6e182b727c3da0bc73a3b1e6a719e3c92e8e614ac487d\"" Aug 5 22:22:45.587427 containerd[1442]: time="2024-08-05T22:22:45.587393614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:22:45.616985 systemd[1]: Started cri-containerd-d91d97ab49ed84b318e6e182b727c3da0bc73a3b1e6a719e3c92e8e614ac487d.scope - libcontainer container d91d97ab49ed84b318e6e182b727c3da0bc73a3b1e6a719e3c92e8e614ac487d. 
Aug 5 22:22:45.637575 containerd[1442]: time="2024-08-05T22:22:45.637530495Z" level=info msg="StartContainer for \"d91d97ab49ed84b318e6e182b727c3da0bc73a3b1e6a719e3c92e8e614ac487d\" returns successfully" Aug 5 22:22:46.304790 kubelet[2526]: E0805 22:22:46.304714 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:46.315372 kubelet[2526]: I0805 22:22:46.315303 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rscnh" podStartSLOduration=36.315263349 podStartE2EDuration="36.315263349s" podCreationTimestamp="2024-08-05 22:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:22:46.314459198 +0000 UTC m=+52.257090049" watchObservedRunningTime="2024-08-05 22:22:46.315263349 +0000 UTC m=+52.257894120" Aug 5 22:22:46.555742 containerd[1442]: time="2024-08-05T22:22:46.555138702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:46.556471 containerd[1442]: time="2024-08-05T22:22:46.556188771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 22:22:46.557714 containerd[1442]: time="2024-08-05T22:22:46.557069841Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:46.560498 containerd[1442]: time="2024-08-05T22:22:46.560380005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 972.947392ms" Aug 5 22:22:46.560498 containerd[1442]: time="2024-08-05T22:22:46.560416365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 22:22:46.562840 containerd[1442]: time="2024-08-05T22:22:46.562809939Z" level=info msg="CreateContainer within sandbox \"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:22:46.578715 containerd[1442]: time="2024-08-05T22:22:46.578184172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:46.584934 containerd[1442]: time="2024-08-05T22:22:46.584895339Z" level=info msg="CreateContainer within sandbox \"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d0eed7fe92ad5e472c93c861488d1c0eb14c9172a61beff377e1cc943bdb110\"" Aug 5 22:22:46.585805 containerd[1442]: time="2024-08-05T22:22:46.585772929Z" level=info msg="StartContainer for \"5d0eed7fe92ad5e472c93c861488d1c0eb14c9172a61beff377e1cc943bdb110\"" Aug 5 22:22:46.613898 systemd[1]: Started cri-containerd-5d0eed7fe92ad5e472c93c861488d1c0eb14c9172a61beff377e1cc943bdb110.scope - libcontainer container 5d0eed7fe92ad5e472c93c861488d1c0eb14c9172a61beff377e1cc943bdb110. 
Aug 5 22:22:46.642117 containerd[1442]: time="2024-08-05T22:22:46.642040398Z" level=info msg="StartContainer for \"5d0eed7fe92ad5e472c93c861488d1c0eb14c9172a61beff377e1cc943bdb110\" returns successfully" Aug 5 22:22:46.643376 containerd[1442]: time="2024-08-05T22:22:46.643316064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:22:46.769890 systemd-networkd[1373]: calibbe5a1f7077: Gained IPv6LL Aug 5 22:22:47.158755 systemd-networkd[1373]: calic4e285434b1: Gained IPv6LL Aug 5 22:22:47.308504 kubelet[2526]: E0805 22:22:47.308477 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:47.666541 containerd[1442]: time="2024-08-05T22:22:47.666463443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:47.667320 containerd[1442]: time="2024-08-05T22:22:47.667172795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 22:22:47.668097 containerd[1442]: time="2024-08-05T22:22:47.668056306Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:47.670717 containerd[1442]: time="2024-08-05T22:22:47.670667638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:47.671501 containerd[1442]: time="2024-08-05T22:22:47.671470430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id 
\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.028118206s" Aug 5 22:22:47.671559 containerd[1442]: time="2024-08-05T22:22:47.671502549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 22:22:47.673588 containerd[1442]: time="2024-08-05T22:22:47.673561368Z" level=info msg="CreateContainer within sandbox \"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:22:47.687728 containerd[1442]: time="2024-08-05T22:22:47.687667858Z" level=info msg="CreateContainer within sandbox \"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e5b43fb66690b6da8f9f2cbf57c64f8475bfeb90482c29cf85a20725eacc1bb6\"" Aug 5 22:22:47.688078 containerd[1442]: time="2024-08-05T22:22:47.688048454Z" level=info msg="StartContainer for \"e5b43fb66690b6da8f9f2cbf57c64f8475bfeb90482c29cf85a20725eacc1bb6\"" Aug 5 22:22:47.718918 systemd[1]: Started cri-containerd-e5b43fb66690b6da8f9f2cbf57c64f8475bfeb90482c29cf85a20725eacc1bb6.scope - libcontainer container e5b43fb66690b6da8f9f2cbf57c64f8475bfeb90482c29cf85a20725eacc1bb6. Aug 5 22:22:47.744063 containerd[1442]: time="2024-08-05T22:22:47.744021781Z" level=info msg="StartContainer for \"e5b43fb66690b6da8f9f2cbf57c64f8475bfeb90482c29cf85a20725eacc1bb6\" returns successfully" Aug 5 22:22:48.091286 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:59234.service - OpenSSH per-connection server daemon (10.0.0.1:59234). 
Aug 5 22:22:48.133967 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 59234 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:48.135397 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:48.139954 systemd-logind[1424]: New session 14 of user core. Aug 5 22:22:48.148860 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:22:48.214986 kubelet[2526]: I0805 22:22:48.214947 2526 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:22:48.214986 kubelet[2526]: I0805 22:22:48.214989 2526 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:22:48.280508 sshd[5092]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:48.290397 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:59234.service: Deactivated successfully. Aug 5 22:22:48.292005 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:22:48.293247 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:22:48.300620 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:59238.service - OpenSSH per-connection server daemon (10.0.0.1:59238). Aug 5 22:22:48.301555 systemd-logind[1424]: Removed session 14. Aug 5 22:22:48.313229 kubelet[2526]: E0805 22:22:48.313021 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:48.333822 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 59238 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:48.335144 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:48.344487 systemd-logind[1424]: New session 15 of user core. 
Aug 5 22:22:48.353917 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:22:48.564742 sshd[5108]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:48.574445 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:59238.service: Deactivated successfully. Aug 5 22:22:48.576233 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:22:48.577602 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:22:48.585979 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:59244.service - OpenSSH per-connection server daemon (10.0.0.1:59244). Aug 5 22:22:48.588098 systemd-logind[1424]: Removed session 15. Aug 5 22:22:48.622529 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 59244 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:48.624332 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:48.629454 systemd-logind[1424]: New session 16 of user core. Aug 5 22:22:48.638356 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:22:49.314308 kubelet[2526]: E0805 22:22:49.314266 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:50.101408 sshd[5120]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:50.113924 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:59244.service: Deactivated successfully. Aug 5 22:22:50.116418 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:22:50.118511 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:22:50.129112 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:59248.service - OpenSSH per-connection server daemon (10.0.0.1:59248). Aug 5 22:22:50.130801 systemd-logind[1424]: Removed session 16. 
Aug 5 22:22:50.174010 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 59248 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:50.176042 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:50.181566 systemd-logind[1424]: New session 17 of user core. Aug 5 22:22:50.188354 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:22:50.461630 sshd[5140]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:50.474903 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:59248.service: Deactivated successfully. Aug 5 22:22:50.481277 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:22:50.485091 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:22:50.496062 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:59258.service - OpenSSH per-connection server daemon (10.0.0.1:59258). Aug 5 22:22:50.497966 systemd-logind[1424]: Removed session 17. Aug 5 22:22:50.529189 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 59258 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:50.531129 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:50.536712 systemd-logind[1424]: New session 18 of user core. Aug 5 22:22:50.544879 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:22:50.656344 sshd[5153]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:50.659397 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:59258.service: Deactivated successfully. Aug 5 22:22:50.662158 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:22:50.664541 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:22:50.666038 systemd-logind[1424]: Removed session 18. 
Aug 5 22:22:54.118398 containerd[1442]: time="2024-08-05T22:22:54.118079784Z" level=info msg="StopPodSandbox for \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\"" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.163 [WARNING][5195] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cnk28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd02d030-3ece-464a-a009-26d6a3348ddd", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70", Pod:"coredns-76f75df574-cnk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b64fef886", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.163 [INFO][5195] k8s.go 608: Cleaning up netns ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.163 [INFO][5195] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" iface="eth0" netns="" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.164 [INFO][5195] k8s.go 615: Releasing IP address(es) ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.164 [INFO][5195] utils.go 188: Calico CNI releasing IP address ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.193 [INFO][5204] ipam_plugin.go 411: Releasing address using handleID ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.193 [INFO][5204] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.193 [INFO][5204] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.201 [WARNING][5204] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.202 [INFO][5204] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.203 [INFO][5204] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.206769 containerd[1442]: 2024-08-05 22:22:54.205 [INFO][5195] k8s.go 621: Teardown processing complete. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.207413 containerd[1442]: time="2024-08-05T22:22:54.207290946Z" level=info msg="TearDown network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" successfully" Aug 5 22:22:54.207413 containerd[1442]: time="2024-08-05T22:22:54.207331186Z" level=info msg="StopPodSandbox for \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" returns successfully" Aug 5 22:22:54.207873 containerd[1442]: time="2024-08-05T22:22:54.207837621Z" level=info msg="RemovePodSandbox for \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\"" Aug 5 22:22:54.214714 containerd[1442]: time="2024-08-05T22:22:54.207877221Z" level=info msg="Forcibly stopping sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\"" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.247 [WARNING][5227] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cnk28-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd02d030-3ece-464a-a009-26d6a3348ddd", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47a8c782146cb3cb97a1f18068337f66c978f3b9848b778f6502169531d76e70", Pod:"coredns-76f75df574-cnk28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b64fef886", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.248 [INFO][5227] k8s.go 608: Cleaning up netns 
ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.248 [INFO][5227] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" iface="eth0" netns="" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.248 [INFO][5227] k8s.go 615: Releasing IP address(es) ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.248 [INFO][5227] utils.go 188: Calico CNI releasing IP address ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.270 [INFO][5234] ipam_plugin.go 411: Releasing address using handleID ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.270 [INFO][5234] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.270 [INFO][5234] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.278 [WARNING][5234] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.278 [INFO][5234] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" HandleID="k8s-pod-network.2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Workload="localhost-k8s-coredns--76f75df574--cnk28-eth0" Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.279 [INFO][5234] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.283043 containerd[1442]: 2024-08-05 22:22:54.281 [INFO][5227] k8s.go 621: Teardown processing complete. ContainerID="2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e" Aug 5 22:22:54.283423 containerd[1442]: time="2024-08-05T22:22:54.283081508Z" level=info msg="TearDown network for sandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" successfully" Aug 5 22:22:54.292158 containerd[1442]: time="2024-08-05T22:22:54.291839910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:54.292158 containerd[1442]: time="2024-08-05T22:22:54.292006828Z" level=info msg="RemovePodSandbox \"2f32654cbee0559b9e0843f89ede6a1a316c901e168b506cf0e83e1f8e67ee0e\" returns successfully" Aug 5 22:22:54.293206 containerd[1442]: time="2024-08-05T22:22:54.292759101Z" level=info msg="StopPodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\"" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.332 [WARNING][5255] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4gbsx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f86a8cc4-dcb1-4c07-ba89-5368804c0223", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836", Pod:"csi-node-driver-4gbsx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calic4e285434b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.332 [INFO][5255] k8s.go 608: Cleaning up netns ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.332 [INFO][5255] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" iface="eth0" netns="" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.332 [INFO][5255] k8s.go 615: Releasing IP address(es) ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.332 [INFO][5255] utils.go 188: Calico CNI releasing IP address ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.354 [INFO][5263] ipam_plugin.go 411: Releasing address using handleID ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.354 [INFO][5263] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.354 [INFO][5263] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.362 [WARNING][5263] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.362 [INFO][5263] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.363 [INFO][5263] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.368273 containerd[1442]: 2024-08-05 22:22:54.365 [INFO][5255] k8s.go 621: Teardown processing complete. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.368762 containerd[1442]: time="2024-08-05T22:22:54.368303946Z" level=info msg="TearDown network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" successfully" Aug 5 22:22:54.368762 containerd[1442]: time="2024-08-05T22:22:54.368329225Z" level=info msg="StopPodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" returns successfully" Aug 5 22:22:54.369902 containerd[1442]: time="2024-08-05T22:22:54.369845892Z" level=info msg="RemovePodSandbox for \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\"" Aug 5 22:22:54.370006 containerd[1442]: time="2024-08-05T22:22:54.369880812Z" level=info msg="Forcibly stopping sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\"" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.409 [WARNING][5286] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4gbsx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f86a8cc4-dcb1-4c07-ba89-5368804c0223", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ac1b8be2ceb3794623e28d25fe94bb8f6b76948bcafbcded5046a0b70145836", Pod:"csi-node-driver-4gbsx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic4e285434b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.410 [INFO][5286] k8s.go 608: Cleaning up netns ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.410 [INFO][5286] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" iface="eth0" netns="" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.410 [INFO][5286] k8s.go 615: Releasing IP address(es) ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.410 [INFO][5286] utils.go 188: Calico CNI releasing IP address ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.430 [INFO][5293] ipam_plugin.go 411: Releasing address using handleID ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.430 [INFO][5293] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.431 [INFO][5293] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.439 [WARNING][5293] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.439 [INFO][5293] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" HandleID="k8s-pod-network.c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Workload="localhost-k8s-csi--node--driver--4gbsx-eth0" Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.441 [INFO][5293] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:22:54.447003 containerd[1442]: 2024-08-05 22:22:54.443 [INFO][5286] k8s.go 621: Teardown processing complete. ContainerID="c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8" Aug 5 22:22:54.447373 containerd[1442]: time="2024-08-05T22:22:54.447014042Z" level=info msg="TearDown network for sandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" successfully" Aug 5 22:22:54.450052 containerd[1442]: time="2024-08-05T22:22:54.450014695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:22:54.450126 containerd[1442]: time="2024-08-05T22:22:54.450101854Z" level=info msg="RemovePodSandbox \"c3cba8607d516ca0eb55413ae1cf23d5404d798ce23e932e16ff4e08c1284ab8\" returns successfully" Aug 5 22:22:54.450963 containerd[1442]: time="2024-08-05T22:22:54.450610769Z" level=info msg="StopPodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\"" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.483 [WARNING][5316] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0", GenerateName:"calico-kube-controllers-59466c9fdc-", Namespace:"calico-system", SelfLink:"", UID:"894e3f76-424c-45d5-b5c3-82011c14a86f", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59466c9fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390", Pod:"calico-kube-controllers-59466c9fdc-nnr4r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6dee42cf023", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.483 [INFO][5316] k8s.go 608: Cleaning up netns ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.483 [INFO][5316] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" iface="eth0" netns="" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.483 [INFO][5316] k8s.go 615: Releasing IP address(es) ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.483 [INFO][5316] utils.go 188: Calico CNI releasing IP address ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.502 [INFO][5323] ipam_plugin.go 411: Releasing address using handleID ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.502 [INFO][5323] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.502 [INFO][5323] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.510 [WARNING][5323] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.510 [INFO][5323] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.513 [INFO][5323] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.516849 containerd[1442]: 2024-08-05 22:22:54.515 [INFO][5316] k8s.go 621: Teardown processing complete. ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.517427 containerd[1442]: time="2024-08-05T22:22:54.517300573Z" level=info msg="TearDown network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" successfully" Aug 5 22:22:54.517427 containerd[1442]: time="2024-08-05T22:22:54.517349052Z" level=info msg="StopPodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" returns successfully" Aug 5 22:22:54.517865 containerd[1442]: time="2024-08-05T22:22:54.517837088Z" level=info msg="RemovePodSandbox for \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\"" Aug 5 22:22:54.517919 containerd[1442]: time="2024-08-05T22:22:54.517877848Z" level=info msg="Forcibly stopping sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\"" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.552 [WARNING][5344] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0", GenerateName:"calico-kube-controllers-59466c9fdc-", Namespace:"calico-system", SelfLink:"", UID:"894e3f76-424c-45d5-b5c3-82011c14a86f", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59466c9fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb55282830092ebd3cbf509384cbe218f549b5f8ce7667898d318ffbf0c9b390", Pod:"calico-kube-controllers-59466c9fdc-nnr4r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6dee42cf023", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.553 [INFO][5344] k8s.go 608: Cleaning up netns ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.553 [INFO][5344] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" iface="eth0" netns="" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.553 [INFO][5344] k8s.go 615: Releasing IP address(es) ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.553 [INFO][5344] utils.go 188: Calico CNI releasing IP address ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.574 [INFO][5351] ipam_plugin.go 411: Releasing address using handleID ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.574 [INFO][5351] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.574 [INFO][5351] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.582 [WARNING][5351] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.582 [INFO][5351] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" HandleID="k8s-pod-network.f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Workload="localhost-k8s-calico--kube--controllers--59466c9fdc--nnr4r-eth0" Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.584 [INFO][5351] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.589598 containerd[1442]: 2024-08-05 22:22:54.587 [INFO][5344] k8s.go 621: Teardown processing complete. ContainerID="f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2" Aug 5 22:22:54.590043 containerd[1442]: time="2024-08-05T22:22:54.589618926Z" level=info msg="TearDown network for sandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" successfully" Aug 5 22:22:54.592448 containerd[1442]: time="2024-08-05T22:22:54.592409981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:54.592572 containerd[1442]: time="2024-08-05T22:22:54.592475260Z" level=info msg="RemovePodSandbox \"f68d983e7c3eeea37cd4b206105d5883b092149f0ec911982605e85d731696f2\" returns successfully" Aug 5 22:22:54.592935 containerd[1442]: time="2024-08-05T22:22:54.592891257Z" level=info msg="StopPodSandbox for \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\"" Aug 5 22:22:54.593008 containerd[1442]: time="2024-08-05T22:22:54.592966856Z" level=info msg="TearDown network for sandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" successfully" Aug 5 22:22:54.593040 containerd[1442]: time="2024-08-05T22:22:54.593006536Z" level=info msg="StopPodSandbox for \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" returns successfully" Aug 5 22:22:54.593580 containerd[1442]: time="2024-08-05T22:22:54.593528891Z" level=info msg="RemovePodSandbox for \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\"" Aug 5 22:22:54.593623 containerd[1442]: time="2024-08-05T22:22:54.593588610Z" level=info msg="Forcibly stopping sandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\"" Aug 5 22:22:54.593778 containerd[1442]: time="2024-08-05T22:22:54.593742289Z" level=info msg="TearDown network for sandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" successfully" Aug 5 22:22:54.596616 containerd[1442]: time="2024-08-05T22:22:54.596577744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:54.596695 containerd[1442]: time="2024-08-05T22:22:54.596641343Z" level=info msg="RemovePodSandbox \"d9bb25eb5077021ff05f07a82cfddd79b055477e178678f47809adbf759b8c64\" returns successfully" Aug 5 22:22:54.597196 containerd[1442]: time="2024-08-05T22:22:54.596925741Z" level=info msg="StopPodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\"" Aug 5 22:22:54.658007 kubelet[2526]: E0805 22:22:54.657910 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:54.674523 kubelet[2526]: I0805 22:22:54.674110 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-4gbsx" podStartSLOduration=34.588330533 podStartE2EDuration="36.674064811s" podCreationTimestamp="2024-08-05 22:22:18 +0000 UTC" firstStartedPulling="2024-08-05 22:22:45.586006509 +0000 UTC m=+51.528637280" lastFinishedPulling="2024-08-05 22:22:47.671740787 +0000 UTC m=+53.614371558" observedRunningTime="2024-08-05 22:22:48.339537396 +0000 UTC m=+54.282168167" watchObservedRunningTime="2024-08-05 22:22:54.674064811 +0000 UTC m=+60.616695582" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.635 [WARNING][5390] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rscnh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702", Pod:"coredns-76f75df574-rscnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5a1f7077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.635 [INFO][5390] k8s.go 608: Cleaning up netns 
ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.635 [INFO][5390] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" iface="eth0" netns="" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.635 [INFO][5390] k8s.go 615: Releasing IP address(es) ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.635 [INFO][5390] utils.go 188: Calico CNI releasing IP address ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.665 [INFO][5404] ipam_plugin.go 411: Releasing address using handleID ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.665 [INFO][5404] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.665 [INFO][5404] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.676 [WARNING][5404] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.676 [INFO][5404] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.679 [INFO][5404] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.683557 containerd[1442]: 2024-08-05 22:22:54.682 [INFO][5390] k8s.go 621: Teardown processing complete. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.684174 containerd[1442]: time="2024-08-05T22:22:54.684052201Z" level=info msg="TearDown network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" successfully" Aug 5 22:22:54.684174 containerd[1442]: time="2024-08-05T22:22:54.684084441Z" level=info msg="StopPodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" returns successfully" Aug 5 22:22:54.684535 containerd[1442]: time="2024-08-05T22:22:54.684486117Z" level=info msg="RemovePodSandbox for \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\"" Aug 5 22:22:54.684580 containerd[1442]: time="2024-08-05T22:22:54.684528517Z" level=info msg="Forcibly stopping sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\"" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.716 [WARNING][5427] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rscnh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"34e6f00e-cfcb-4806-b4ed-da8a03b61a5e", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed4adc0a385b619aff1bfc0ef9f886912edeba1ac6f90139cd1bb685dbbdd702", Pod:"coredns-76f75df574-rscnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbe5a1f7077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.716 [INFO][5427] k8s.go 608: Cleaning up netns 
ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.716 [INFO][5427] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" iface="eth0" netns="" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.716 [INFO][5427] k8s.go 615: Releasing IP address(es) ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.716 [INFO][5427] utils.go 188: Calico CNI releasing IP address ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.735 [INFO][5435] ipam_plugin.go 411: Releasing address using handleID ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.735 [INFO][5435] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.735 [INFO][5435] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.743 [WARNING][5435] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.743 [INFO][5435] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" HandleID="k8s-pod-network.e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Workload="localhost-k8s-coredns--76f75df574--rscnh-eth0" Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.745 [INFO][5435] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:54.750295 containerd[1442]: 2024-08-05 22:22:54.748 [INFO][5427] k8s.go 621: Teardown processing complete. ContainerID="e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a" Aug 5 22:22:54.750897 containerd[1442]: time="2024-08-05T22:22:54.750344048Z" level=info msg="TearDown network for sandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" successfully" Aug 5 22:22:54.756982 containerd[1442]: time="2024-08-05T22:22:54.756928869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:54.757089 containerd[1442]: time="2024-08-05T22:22:54.757001629Z" level=info msg="RemovePodSandbox \"e9abda5d0ef5f804889b1660f5a992fc203f3978b1441b083025f4909415bc0a\" returns successfully" Aug 5 22:22:54.757464 containerd[1442]: time="2024-08-05T22:22:54.757420745Z" level=info msg="StopPodSandbox for \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\"" Aug 5 22:22:54.757559 containerd[1442]: time="2024-08-05T22:22:54.757502544Z" level=info msg="TearDown network for sandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" successfully" Aug 5 22:22:54.757559 containerd[1442]: time="2024-08-05T22:22:54.757557344Z" level=info msg="StopPodSandbox for \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" returns successfully" Aug 5 22:22:54.757879 containerd[1442]: time="2024-08-05T22:22:54.757852101Z" level=info msg="RemovePodSandbox for \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\"" Aug 5 22:22:54.757926 containerd[1442]: time="2024-08-05T22:22:54.757877981Z" level=info msg="Forcibly stopping sandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\"" Aug 5 22:22:54.757962 containerd[1442]: time="2024-08-05T22:22:54.757944820Z" level=info msg="TearDown network for sandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" successfully" Aug 5 22:22:54.761313 containerd[1442]: time="2024-08-05T22:22:54.761119752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:54.761313 containerd[1442]: time="2024-08-05T22:22:54.761216231Z" level=info msg="RemovePodSandbox \"b9e3d6b6901bacde775df695318aa7dfca5a19f8dd3bb9156b1c7fa63a0e01ae\" returns successfully" Aug 5 22:22:55.668838 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:51606.service - OpenSSH per-connection server daemon (10.0.0.1:51606). Aug 5 22:22:55.702776 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 51606 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:22:55.704012 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:55.708027 systemd-logind[1424]: New session 19 of user core. Aug 5 22:22:55.717864 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:22:55.824550 sshd[5445]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:55.827661 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:51606.service: Deactivated successfully. Aug 5 22:22:55.830603 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:22:55.831389 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:22:55.832265 systemd-logind[1424]: Removed session 19. Aug 5 22:22:57.805251 kubelet[2526]: I0805 22:22:57.805192 2526 topology_manager.go:215] "Topology Admit Handler" podUID="4540c8f9-169e-405a-a58a-dadd672750c5" podNamespace="calico-apiserver" podName="calico-apiserver-68c4b67667-4s8bc" Aug 5 22:22:57.819420 systemd[1]: Created slice kubepods-besteffort-pod4540c8f9_169e_405a_a58a_dadd672750c5.slice - libcontainer container kubepods-besteffort-pod4540c8f9_169e_405a_a58a_dadd672750c5.slice. 
Aug 5 22:22:57.935448 kubelet[2526]: I0805 22:22:57.935412 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psr46\" (UniqueName: \"kubernetes.io/projected/4540c8f9-169e-405a-a58a-dadd672750c5-kube-api-access-psr46\") pod \"calico-apiserver-68c4b67667-4s8bc\" (UID: \"4540c8f9-169e-405a-a58a-dadd672750c5\") " pod="calico-apiserver/calico-apiserver-68c4b67667-4s8bc"
Aug 5 22:22:57.935559 kubelet[2526]: I0805 22:22:57.935484 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4540c8f9-169e-405a-a58a-dadd672750c5-calico-apiserver-certs\") pod \"calico-apiserver-68c4b67667-4s8bc\" (UID: \"4540c8f9-169e-405a-a58a-dadd672750c5\") " pod="calico-apiserver/calico-apiserver-68c4b67667-4s8bc"
Aug 5 22:22:58.036243 kubelet[2526]: E0805 22:22:58.036196 2526 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Aug 5 22:22:58.036363 kubelet[2526]: E0805 22:22:58.036286 2526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4540c8f9-169e-405a-a58a-dadd672750c5-calico-apiserver-certs podName:4540c8f9-169e-405a-a58a-dadd672750c5 nodeName:}" failed. No retries permitted until 2024-08-05 22:22:58.536268224 +0000 UTC m=+64.478898995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4540c8f9-169e-405a-a58a-dadd672750c5-calico-apiserver-certs") pod "calico-apiserver-68c4b67667-4s8bc" (UID: "4540c8f9-169e-405a-a58a-dadd672750c5") : secret "calico-apiserver-certs" not found
Aug 5 22:22:58.723104 containerd[1442]: time="2024-08-05T22:22:58.723061746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c4b67667-4s8bc,Uid:4540c8f9-169e-405a-a58a-dadd672750c5,Namespace:calico-apiserver,Attempt:0,}"
Aug 5 22:22:58.843312 systemd-networkd[1373]: calib77147fc32e: Link UP
Aug 5 22:22:58.843865 systemd-networkd[1373]: calib77147fc32e: Gained carrier
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.765 [INFO][5468] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0 calico-apiserver-68c4b67667- calico-apiserver 4540c8f9-169e-405a-a58a-dadd672750c5 1132 0 2024-08-05 22:22:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c4b67667 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68c4b67667-4s8bc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib77147fc32e [] []}} ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.765 [INFO][5468] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.799 [INFO][5482] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" HandleID="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Workload="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.811 [INFO][5482] ipam_plugin.go 264: Auto assigning IP ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" HandleID="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Workload="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d92e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68c4b67667-4s8bc", "timestamp":"2024-08-05 22:22:58.799368283 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.811 [INFO][5482] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.811 [INFO][5482] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.811 [INFO][5482] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.813 [INFO][5482] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.818 [INFO][5482] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.822 [INFO][5482] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.824 [INFO][5482] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.828 [INFO][5482] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.828 [INFO][5482] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.830 [INFO][5482] ipam.go 1685: Creating new handle: k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.833 [INFO][5482] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.838 [INFO][5482] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.838 [INFO][5482] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" host="localhost"
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.838 [INFO][5482] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:22:58.855549 containerd[1442]: 2024-08-05 22:22:58.838 [INFO][5482] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" HandleID="k8s-pod-network.5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Workload="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.856188 containerd[1442]: 2024-08-05 22:22:58.840 [INFO][5468] k8s.go 386: Populated endpoint ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0", GenerateName:"calico-apiserver-68c4b67667-", Namespace:"calico-apiserver", SelfLink:"", UID:"4540c8f9-169e-405a-a58a-dadd672750c5", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c4b67667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68c4b67667-4s8bc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib77147fc32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:58.856188 containerd[1442]: 2024-08-05 22:22:58.841 [INFO][5468] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.856188 containerd[1442]: 2024-08-05 22:22:58.841 [INFO][5468] dataplane_linux.go 68: Setting the host side veth name to calib77147fc32e ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.856188 containerd[1442]: 2024-08-05 22:22:58.843 [INFO][5468] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.856188 containerd[1442]: 2024-08-05 22:22:58.843 [INFO][5468] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0", GenerateName:"calico-apiserver-68c4b67667-", Namespace:"calico-apiserver", SelfLink:"", UID:"4540c8f9-169e-405a-a58a-dadd672750c5", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c4b67667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6", Pod:"calico-apiserver-68c4b67667-4s8bc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib77147fc32e", MAC:"52:05:a7:1d:bb:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:58.856188 containerd[1442]: 2024-08-05 22:22:58.851 [INFO][5468] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6" Namespace="calico-apiserver" Pod="calico-apiserver-68c4b67667-4s8bc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c4b67667--4s8bc-eth0"
Aug 5 22:22:58.881006 containerd[1442]: time="2024-08-05T22:22:58.880917699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:58.881006 containerd[1442]: time="2024-08-05T22:22:58.880974658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:58.881006 containerd[1442]: time="2024-08-05T22:22:58.880998578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:58.881140 containerd[1442]: time="2024-08-05T22:22:58.881011178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:58.897864 systemd[1]: Started cri-containerd-5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6.scope - libcontainer container 5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6.
Aug 5 22:22:58.907627 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:22:58.925968 containerd[1442]: time="2024-08-05T22:22:58.925930132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c4b67667-4s8bc,Uid:4540c8f9-169e-405a-a58a-dadd672750c5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6\""
Aug 5 22:22:58.929619 containerd[1442]: time="2024-08-05T22:22:58.929580502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:23:00.466175 systemd-networkd[1373]: calib77147fc32e: Gained IPv6LL
Aug 5 22:23:00.518455 containerd[1442]: time="2024-08-05T22:23:00.518393159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:00.526847 containerd[1442]: time="2024-08-05T22:23:00.526802974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Aug 5 22:23:00.527839 containerd[1442]: time="2024-08-05T22:23:00.527799886Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:00.530002 containerd[1442]: time="2024-08-05T22:23:00.529965509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:00.531048 containerd[1442]: time="2024-08-05T22:23:00.531006701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.601390079s"
Aug 5 22:23:00.531087 containerd[1442]: time="2024-08-05T22:23:00.531048181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Aug 5 22:23:00.533647 containerd[1442]: time="2024-08-05T22:23:00.533614001Z" level=info msg="CreateContainer within sandbox \"5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:23:00.545376 containerd[1442]: time="2024-08-05T22:23:00.545328989Z" level=info msg="CreateContainer within sandbox \"5950130cefe4ddcbbfd2a8fc295e757d493bdf2889aa8b77994cf95afbf03bb6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03434e54b2ca2fdab776f9f115864932aa23c29118b94c764880a5c212997c82\""
Aug 5 22:23:00.546140 containerd[1442]: time="2024-08-05T22:23:00.545778546Z" level=info msg="StartContainer for \"03434e54b2ca2fdab776f9f115864932aa23c29118b94c764880a5c212997c82\""
Aug 5 22:23:00.576852 systemd[1]: Started cri-containerd-03434e54b2ca2fdab776f9f115864932aa23c29118b94c764880a5c212997c82.scope - libcontainer container 03434e54b2ca2fdab776f9f115864932aa23c29118b94c764880a5c212997c82.
Aug 5 22:23:00.603979 containerd[1442]: time="2024-08-05T22:23:00.603932333Z" level=info msg="StartContainer for \"03434e54b2ca2fdab776f9f115864932aa23c29118b94c764880a5c212997c82\" returns successfully"
Aug 5 22:23:00.839763 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:51608.service - OpenSSH per-connection server daemon (10.0.0.1:51608).
Aug 5 22:23:00.882526 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:23:00.884011 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:23:00.888613 systemd-logind[1424]: New session 20 of user core.
Aug 5 22:23:00.896837 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:23:01.015449 sshd[5611]: pam_unix(sshd:session): session closed for user core
Aug 5 22:23:01.019059 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:51608.service: Deactivated successfully.
Aug 5 22:23:01.021112 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:23:01.022189 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:23:01.023126 systemd-logind[1424]: Removed session 20.
Aug 5 22:23:01.778904 kubelet[2526]: I0805 22:23:01.778828 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68c4b67667-4s8bc" podStartSLOduration=3.175806925 podStartE2EDuration="4.778360235s" podCreationTimestamp="2024-08-05 22:22:57 +0000 UTC" firstStartedPulling="2024-08-05 22:22:58.928958067 +0000 UTC m=+64.871588838" lastFinishedPulling="2024-08-05 22:23:00.531511377 +0000 UTC m=+66.474142148" observedRunningTime="2024-08-05 22:23:01.378189445 +0000 UTC m=+67.320820256" watchObservedRunningTime="2024-08-05 22:23:01.778360235 +0000 UTC m=+67.720990966"
Aug 5 22:23:06.029259 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:54026.service - OpenSSH per-connection server daemon (10.0.0.1:54026).
Aug 5 22:23:06.081844 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 54026 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:23:06.083253 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:23:06.091815 systemd-logind[1424]: New session 21 of user core.
Aug 5 22:23:06.099823 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 5 22:23:06.142804 kubelet[2526]: E0805 22:23:06.142753 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:06.235089 sshd[5646]: pam_unix(sshd:session): session closed for user core
Aug 5 22:23:06.239281 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:54026.service: Deactivated successfully.
Aug 5 22:23:06.241742 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:23:06.242750 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:23:06.243621 systemd-logind[1424]: Removed session 21.