Aug 5 21:31:48.914471 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 21:31:48.914492 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024
Aug 5 21:31:48.914501 kernel: KASLR enabled
Aug 5 21:31:48.914507 kernel: efi: EFI v2.7 by EDK II
Aug 5 21:31:48.914512 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 21:31:48.914518 kernel: random: crng init done
Aug 5 21:31:48.914525 kernel: ACPI: Early table checksum verification disabled
Aug 5 21:31:48.914531 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 21:31:48.914537 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 21:31:48.914545 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914551 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914557 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914563 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914569 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914577 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914584 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914591 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914597 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 21:31:48.914603 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 21:31:48.914610 kernel: NUMA: Failed to initialise from firmware
Aug 5 21:31:48.914616 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:31:48.914622 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 5 21:31:48.914629 kernel: Zone ranges:
Aug 5 21:31:48.914635 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:31:48.914641 kernel: DMA32 empty
Aug 5 21:31:48.914648 kernel: Normal empty
Aug 5 21:31:48.914655 kernel: Movable zone start for each node
Aug 5 21:31:48.914661 kernel: Early memory node ranges
Aug 5 21:31:48.914667 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 21:31:48.914674 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 21:31:48.914680 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 21:31:48.914686 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 21:31:48.914692 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 21:31:48.914699 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 21:31:48.914705 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 21:31:48.914711 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 21:31:48.914717 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 21:31:48.914725 kernel: psci: probing for conduit method from ACPI.
Aug 5 21:31:48.914736 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 21:31:48.914742 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 21:31:48.914754 kernel: psci: Trusted OS migration not required
Aug 5 21:31:48.914762 kernel: psci: SMC Calling Convention v1.1
Aug 5 21:31:48.914772 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 21:31:48.914781 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 21:31:48.914788 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 21:31:48.914795 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 21:31:48.914828 kernel: Detected PIPT I-cache on CPU0
Aug 5 21:31:48.914836 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 21:31:48.914843 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 21:31:48.914850 kernel: CPU features: detected: Spectre-v4
Aug 5 21:31:48.914856 kernel: CPU features: detected: Spectre-BHB
Aug 5 21:31:48.914863 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 21:31:48.914870 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 21:31:48.914879 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 21:31:48.914886 kernel: alternatives: applying boot alternatives
Aug 5 21:31:48.914894 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:31:48.914901 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 21:31:48.914907 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 21:31:48.914914 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 21:31:48.914921 kernel: Fallback order for Node 0: 0
Aug 5 21:31:48.914928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 21:31:48.914934 kernel: Policy zone: DMA
Aug 5 21:31:48.914941 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 21:31:48.914947 kernel: software IO TLB: area num 4.
Aug 5 21:31:48.914955 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 21:31:48.914963 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Aug 5 21:31:48.914970 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 21:31:48.914976 kernel: trace event string verifier disabled
Aug 5 21:31:48.914983 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 21:31:48.914990 kernel: rcu: RCU event tracing is enabled.
Aug 5 21:31:48.914997 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 21:31:48.915004 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 21:31:48.915010 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 21:31:48.915017 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 21:31:48.915024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 21:31:48.915030 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 21:31:48.915038 kernel: GICv3: 256 SPIs implemented
Aug 5 21:31:48.915045 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 21:31:48.915052 kernel: Root IRQ handler: gic_handle_irq
Aug 5 21:31:48.915058 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 21:31:48.915065 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 21:31:48.915071 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 21:31:48.915078 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 21:31:48.915085 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 21:31:48.915092 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 21:31:48.915098 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 21:31:48.915105 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 21:31:48.915113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:31:48.915120 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 21:31:48.915127 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 21:31:48.915134 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 21:31:48.915140 kernel: arm-pv: using stolen time PV
Aug 5 21:31:48.915147 kernel: Console: colour dummy device 80x25
Aug 5 21:31:48.915154 kernel: ACPI: Core revision 20230628
Aug 5 21:31:48.915161 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 21:31:48.915168 kernel: pid_max: default: 32768 minimum: 301
Aug 5 21:31:48.915175 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 21:31:48.915183 kernel: SELinux: Initializing.
Aug 5 21:31:48.915190 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:31:48.915197 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:31:48.915204 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:31:48.915210 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:31:48.915217 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 21:31:48.915224 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 21:31:48.915231 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 21:31:48.915238 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 21:31:48.915246 kernel: Remapping and enabling EFI services.
Aug 5 21:31:48.915253 kernel: smp: Bringing up secondary CPUs ...
Aug 5 21:31:48.915260 kernel: Detected PIPT I-cache on CPU1
Aug 5 21:31:48.915267 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 21:31:48.915274 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 21:31:48.915280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:31:48.915287 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 21:31:48.915294 kernel: Detected PIPT I-cache on CPU2
Aug 5 21:31:48.915301 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 21:31:48.915308 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 21:31:48.915316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:31:48.915323 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 21:31:48.915335 kernel: Detected PIPT I-cache on CPU3
Aug 5 21:31:48.915344 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 21:31:48.915351 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 21:31:48.915358 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 21:31:48.915365 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 21:31:48.915377 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 21:31:48.915385 kernel: SMP: Total of 4 processors activated.
Aug 5 21:31:48.915393 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 21:31:48.915401 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 21:31:48.915408 kernel: CPU features: detected: Common not Private translations
Aug 5 21:31:48.915415 kernel: CPU features: detected: CRC32 instructions
Aug 5 21:31:48.915423 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 21:31:48.915430 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 21:31:48.915437 kernel: CPU features: detected: LSE atomic instructions
Aug 5 21:31:48.915444 kernel: CPU features: detected: Privileged Access Never
Aug 5 21:31:48.915453 kernel: CPU features: detected: RAS Extension Support
Aug 5 21:31:48.915460 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 21:31:48.915467 kernel: CPU: All CPU(s) started at EL1
Aug 5 21:31:48.915474 kernel: alternatives: applying system-wide alternatives
Aug 5 21:31:48.915481 kernel: devtmpfs: initialized
Aug 5 21:31:48.915489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 21:31:48.915496 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 21:31:48.915503 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 21:31:48.915511 kernel: SMBIOS 3.0.0 present.
Aug 5 21:31:48.915519 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 21:31:48.915527 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 21:31:48.915534 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 21:31:48.915542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 21:31:48.915549 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 21:31:48.915556 kernel: audit: initializing netlink subsys (disabled)
Aug 5 21:31:48.915563 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1
Aug 5 21:31:48.915571 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 21:31:48.915578 kernel: cpuidle: using governor menu
Aug 5 21:31:48.915586 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 21:31:48.915594 kernel: ASID allocator initialised with 32768 entries
Aug 5 21:31:48.915601 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 21:31:48.915608 kernel: Serial: AMBA PL011 UART driver
Aug 5 21:31:48.915615 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 21:31:48.915623 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 21:31:48.915630 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 21:31:48.915637 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 21:31:48.915644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 21:31:48.915653 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 21:31:48.915660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 21:31:48.915667 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 21:31:48.915675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 21:31:48.915682 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 21:31:48.915689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 21:31:48.915696 kernel: ACPI: Added _OSI(Module Device)
Aug 5 21:31:48.915703 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 21:31:48.915710 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 21:31:48.915719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 21:31:48.915726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 21:31:48.915733 kernel: ACPI: Interpreter enabled
Aug 5 21:31:48.915740 kernel: ACPI: Using GIC for interrupt routing
Aug 5 21:31:48.915747 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 21:31:48.915754 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 21:31:48.915762 kernel: printk: console [ttyAMA0] enabled
Aug 5 21:31:48.915769 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 21:31:48.915915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 21:31:48.915990 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 21:31:48.916055 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 21:31:48.916117 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 21:31:48.916178 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 21:31:48.916187 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 21:31:48.916195 kernel: PCI host bridge to bus 0000:00
Aug 5 21:31:48.916262 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 21:31:48.916322 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 21:31:48.916379 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 21:31:48.916435 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 21:31:48.916513 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 21:31:48.916586 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 21:31:48.916652 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 21:31:48.916718 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 21:31:48.916781 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:31:48.916868 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 21:31:48.916935 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 21:31:48.917000 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 21:31:48.917058 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 21:31:48.917115 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 21:31:48.917176 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 21:31:48.917185 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 21:31:48.917193 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 21:31:48.917200 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 21:31:48.917208 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 21:31:48.917215 kernel: iommu: Default domain type: Translated
Aug 5 21:31:48.917222 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 21:31:48.917230 kernel: efivars: Registered efivars operations
Aug 5 21:31:48.917237 kernel: vgaarb: loaded
Aug 5 21:31:48.917246 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 21:31:48.917253 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 21:31:48.917260 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 21:31:48.917268 kernel: pnp: PnP ACPI init
Aug 5 21:31:48.917341 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 21:31:48.917351 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 21:31:48.917358 kernel: NET: Registered PF_INET protocol family
Aug 5 21:31:48.917366 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 21:31:48.917375 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 21:31:48.917382 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 21:31:48.917390 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 21:31:48.917397 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 21:31:48.917405 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 21:31:48.917412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:31:48.917420 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:31:48.917427 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 21:31:48.917434 kernel: PCI: CLS 0 bytes, default 64
Aug 5 21:31:48.917443 kernel: kvm [1]: HYP mode not available
Aug 5 21:31:48.917450 kernel: Initialise system trusted keyrings
Aug 5 21:31:48.917457 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 21:31:48.917465 kernel: Key type asymmetric registered
Aug 5 21:31:48.917472 kernel: Asymmetric key parser 'x509' registered
Aug 5 21:31:48.917480 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 21:31:48.917487 kernel: io scheduler mq-deadline registered
Aug 5 21:31:48.917494 kernel: io scheduler kyber registered
Aug 5 21:31:48.917501 kernel: io scheduler bfq registered
Aug 5 21:31:48.917510 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 21:31:48.917518 kernel: ACPI: button: Power Button [PWRB]
Aug 5 21:31:48.917526 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 21:31:48.917590 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 21:31:48.917600 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 21:31:48.917607 kernel: thunder_xcv, ver 1.0
Aug 5 21:31:48.917614 kernel: thunder_bgx, ver 1.0
Aug 5 21:31:48.917621 kernel: nicpf, ver 1.0
Aug 5 21:31:48.917628 kernel: nicvf, ver 1.0
Aug 5 21:31:48.917702 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 21:31:48.917764 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:31:48 UTC (1722893508)
Aug 5 21:31:48.917773 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 21:31:48.917781 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 21:31:48.917788 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 21:31:48.917795 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 21:31:48.917817 kernel: NET: Registered PF_INET6 protocol family
Aug 5 21:31:48.917826 kernel: Segment Routing with IPv6
Aug 5 21:31:48.917836 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 21:31:48.917843 kernel: NET: Registered PF_PACKET protocol family
Aug 5 21:31:48.917850 kernel: Key type dns_resolver registered
Aug 5 21:31:48.917857 kernel: registered taskstats version 1
Aug 5 21:31:48.917865 kernel: Loading compiled-in X.509 certificates
Aug 5 21:31:48.917872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09'
Aug 5 21:31:48.917879 kernel: Key type .fscrypt registered
Aug 5 21:31:48.917886 kernel: Key type fscrypt-provisioning registered
Aug 5 21:31:48.917893 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 21:31:48.917902 kernel: ima: Allocated hash algorithm: sha1
Aug 5 21:31:48.917909 kernel: ima: No architecture policies found
Aug 5 21:31:48.917917 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 21:31:48.917924 kernel: clk: Disabling unused clocks
Aug 5 21:31:48.917931 kernel: Freeing unused kernel memory: 39040K
Aug 5 21:31:48.917938 kernel: Run /init as init process
Aug 5 21:31:48.917945 kernel: with arguments:
Aug 5 21:31:48.917952 kernel: /init
Aug 5 21:31:48.917959 kernel: with environment:
Aug 5 21:31:48.917968 kernel: HOME=/
Aug 5 21:31:48.917975 kernel: TERM=linux
Aug 5 21:31:48.917982 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 21:31:48.917991 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:31:48.918001 systemd[1]: Detected virtualization kvm.
Aug 5 21:31:48.918009 systemd[1]: Detected architecture arm64.
Aug 5 21:31:48.918016 systemd[1]: Running in initrd.
Aug 5 21:31:48.918024 systemd[1]: No hostname configured, using default hostname.
Aug 5 21:31:48.918033 systemd[1]: Hostname set to .
Aug 5 21:31:48.918041 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:31:48.918048 systemd[1]: Queued start job for default target initrd.target.
Aug 5 21:31:48.918056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:31:48.918064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:31:48.918072 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 21:31:48.918080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:31:48.918088 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 21:31:48.918097 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 21:31:48.918107 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 21:31:48.918115 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 21:31:48.918122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:31:48.918130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:31:48.918138 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:31:48.918148 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:31:48.918155 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:31:48.918163 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:31:48.918171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:31:48.918179 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:31:48.918186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:31:48.918194 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:31:48.918202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:31:48.918210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:31:48.918219 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:31:48.918227 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:31:48.918235 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 21:31:48.918243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:31:48.918250 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 21:31:48.918258 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 21:31:48.918266 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:31:48.918274 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:31:48.918282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:31:48.918291 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 21:31:48.918299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:31:48.918307 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 21:31:48.918315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:31:48.918342 systemd-journald[236]: Collecting audit messages is disabled.
Aug 5 21:31:48.918361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:31:48.918370 systemd-journald[236]: Journal started
Aug 5 21:31:48.918390 systemd-journald[236]: Runtime Journal (/run/log/journal/9dc3798a9a8d4c39ae4a75f006455976) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:31:48.912817 systemd-modules-load[237]: Inserted module 'overlay'
Aug 5 21:31:48.920838 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:31:48.922349 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:31:48.922887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:31:48.927335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 21:31:48.928027 systemd-modules-load[237]: Inserted module 'br_netfilter'
Aug 5 21:31:48.928824 kernel: Bridge firewalling registered
Aug 5 21:31:48.941012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:31:48.942485 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:31:48.944818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:31:48.945947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:31:48.948639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:31:48.952833 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 21:31:48.954246 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:31:48.956078 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:31:48.963820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:31:48.969835 dracut-cmdline[271]: dracut-dracut-053
Aug 5 21:31:48.971980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:31:48.974361 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:31:49.000933 systemd-resolved[281]: Positive Trust Anchors:
Aug 5 21:31:49.000950 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:31:49.000980 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:31:49.005499 systemd-resolved[281]: Defaulting to hostname 'linux'.
Aug 5 21:31:49.007462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:31:49.010892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:31:49.048846 kernel: SCSI subsystem initialized
Aug 5 21:31:49.053834 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 21:31:49.062849 kernel: iscsi: registered transport (tcp)
Aug 5 21:31:49.075845 kernel: iscsi: registered transport (qla4xxx)
Aug 5 21:31:49.075893 kernel: QLogic iSCSI HBA Driver
Aug 5 21:31:49.125782 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:31:49.135999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 21:31:49.155361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 21:31:49.155437 kernel: device-mapper: uevent: version 1.0.3
Aug 5 21:31:49.155451 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 21:31:49.207836 kernel: raid6: neonx8 gen() 15764 MB/s
Aug 5 21:31:49.224830 kernel: raid6: neonx4 gen() 15643 MB/s
Aug 5 21:31:49.241833 kernel: raid6: neonx2 gen() 13220 MB/s
Aug 5 21:31:49.258835 kernel: raid6: neonx1 gen() 10441 MB/s
Aug 5 21:31:49.275831 kernel: raid6: int64x8 gen() 6960 MB/s
Aug 5 21:31:49.292867 kernel: raid6: int64x4 gen() 7334 MB/s
Aug 5 21:31:49.309844 kernel: raid6: int64x2 gen() 6125 MB/s
Aug 5 21:31:49.326857 kernel: raid6: int64x1 gen() 5050 MB/s
Aug 5 21:31:49.326908 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s
Aug 5 21:31:49.343875 kernel: raid6: .... xor() 11897 MB/s, rmw enabled
Aug 5 21:31:49.343910 kernel: raid6: using neon recovery algorithm
Aug 5 21:31:49.348831 kernel: xor: measuring software checksum speed
Aug 5 21:31:49.349832 kernel: 8regs : 19849 MB/sec
Aug 5 21:31:49.351505 kernel: 32regs : 19640 MB/sec
Aug 5 21:31:49.351529 kernel: arm64_neon : 27161 MB/sec
Aug 5 21:31:49.351539 kernel: xor: using function: arm64_neon (27161 MB/sec)
Aug 5 21:31:49.403858 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 21:31:49.416535 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:31:49.426033 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:31:49.437919 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Aug 5 21:31:49.441543 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:31:49.450970 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 21:31:49.462506 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Aug 5 21:31:49.490928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:31:49.501032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:31:49.543039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:31:49.551002 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 21:31:49.564373 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:31:49.566315 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:31:49.567857 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:31:49.568772 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:31:49.578032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 21:31:49.587976 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:31:49.596847 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 21:31:49.610387 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 21:31:49.610655 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 21:31:49.610669 kernel: GPT:9289727 != 19775487
Aug 5 21:31:49.610678 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 21:31:49.610694 kernel: GPT:9289727 != 19775487
Aug 5 21:31:49.610703 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 21:31:49.610712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:31:49.610975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:31:49.611041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:31:49.614057 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:31:49.615767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:31:49.615861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:31:49.618436 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:31:49.629017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:31:49.633891 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (519)
Aug 5 21:31:49.633915 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509)
Aug 5 21:31:49.643049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 21:31:49.644266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:31:49.651509 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 21:31:49.657822 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 21:31:49.658796 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 21:31:49.664671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:31:49.675953 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 21:31:49.678054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:31:49.681633 disk-uuid[554]: Primary Header is updated.
Aug 5 21:31:49.681633 disk-uuid[554]: Secondary Entries is updated.
Aug 5 21:31:49.681633 disk-uuid[554]: Secondary Header is updated.
Aug 5 21:31:49.684838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:31:49.700889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:31:50.697856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 21:31:50.700085 disk-uuid[555]: The operation has completed successfully.
Aug 5 21:31:50.721046 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 21:31:50.721142 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 21:31:50.741949 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 21:31:50.744657 sh[578]: Success
Aug 5 21:31:50.757857 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 21:31:50.785804 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 21:31:50.794235 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 21:31:50.796431 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 21:31:50.806830 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99
Aug 5 21:31:50.806867 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:31:50.806878 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 21:31:50.806888 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 21:31:50.806904 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 21:31:50.810708 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 21:31:50.811922 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 21:31:50.812667 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 21:31:50.815183 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 21:31:50.825949 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:31:50.825995 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:31:50.826006 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:31:50.828846 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:31:50.835374 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 21:31:50.836988 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:31:50.843284 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 21:31:50.853997 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 21:31:50.905916 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:31:50.914981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:31:50.935308 systemd-networkd[765]: lo: Link UP
Aug 5 21:31:50.935320 systemd-networkd[765]: lo: Gained carrier
Aug 5 21:31:50.936060 systemd-networkd[765]: Enumeration completed
Aug 5 21:31:50.936352 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:31:50.936447 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:31:50.936450 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:31:50.937183 systemd-networkd[765]: eth0: Link UP
Aug 5 21:31:50.937186 systemd-networkd[765]: eth0: Gained carrier
Aug 5 21:31:50.937193 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:31:50.937915 systemd[1]: Reached target network.target - Network.
Aug 5 21:31:50.949552 ignition[676]: Ignition 2.19.0
Aug 5 21:31:50.949562 ignition[676]: Stage: fetch-offline
Aug 5 21:31:50.949596 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:31:50.950865 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:31:50.949604 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:31:50.949688 ignition[676]: parsed url from cmdline: ""
Aug 5 21:31:50.949691 ignition[676]: no config URL provided
Aug 5 21:31:50.949695 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 21:31:50.949702 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Aug 5 21:31:50.949724 ignition[676]: op(1): [started] loading QEMU firmware config module
Aug 5 21:31:50.949728 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 21:31:50.959601 ignition[676]: op(1): [finished] loading QEMU firmware config module
Aug 5 21:31:50.996747 ignition[676]: parsing config with SHA512: 99f185c9eeead20fd518994e1abfaec5bc46b33ceecbbce850fed24d4e8ea8733b6c419f421ea7dc9437bec5c1a1695a1ad8dbbbec72eb5b3ac217bd209bf4e1
Aug 5 21:31:51.000891 unknown[676]: fetched base config from "system"
Aug 5 21:31:51.000902 unknown[676]: fetched user config from "qemu"
Aug 5 21:31:51.002345 ignition[676]: fetch-offline: fetch-offline passed
Aug 5 21:31:51.002428 ignition[676]: Ignition finished successfully
Aug 5 21:31:51.003924 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:31:51.005085 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 21:31:51.011061 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 21:31:51.021712 ignition[777]: Ignition 2.19.0
Aug 5 21:31:51.021723 ignition[777]: Stage: kargs
Aug 5 21:31:51.021915 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:31:51.021925 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:31:51.022804 ignition[777]: kargs: kargs passed
Aug 5 21:31:51.022871 ignition[777]: Ignition finished successfully
Aug 5 21:31:51.025604 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 21:31:51.035974 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 21:31:51.045099 ignition[786]: Ignition 2.19.0
Aug 5 21:31:51.045108 ignition[786]: Stage: disks
Aug 5 21:31:51.045262 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:31:51.045272 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:31:51.047506 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 21:31:51.046176 ignition[786]: disks: disks passed
Aug 5 21:31:51.046223 ignition[786]: Ignition finished successfully
Aug 5 21:31:51.050131 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 21:31:51.051414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 21:31:51.053056 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:31:51.054672 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:31:51.056450 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:31:51.064008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 21:31:51.074278 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 21:31:51.078358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 21:31:51.091925 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 21:31:51.136830 kernel: EXT4-fs (vda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none.
Aug 5 21:31:51.137362 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 21:31:51.138485 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:31:51.159910 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:31:51.161417 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 21:31:51.162669 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 21:31:51.162707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 21:31:51.170730 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Aug 5 21:31:51.170758 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:31:51.170776 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:31:51.170786 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:31:51.162727 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:31:51.169424 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 21:31:51.172494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 21:31:51.175575 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:31:51.176719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:31:51.215490 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 21:31:51.218631 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Aug 5 21:31:51.222411 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 21:31:51.225936 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 21:31:51.292096 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 21:31:51.300949 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 21:31:51.302486 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 21:31:51.308844 kernel: BTRFS info (device vda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:31:51.322667 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 21:31:51.328414 ignition[919]: INFO : Ignition 2.19.0
Aug 5 21:31:51.328414 ignition[919]: INFO : Stage: mount
Aug 5 21:31:51.329741 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:31:51.329741 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:31:51.329741 ignition[919]: INFO : mount: mount passed
Aug 5 21:31:51.329741 ignition[919]: INFO : Ignition finished successfully
Aug 5 21:31:51.331873 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 21:31:51.336895 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 21:31:51.804958 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 21:31:51.816966 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:31:51.821831 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933)
Aug 5 21:31:51.823430 kernel: BTRFS info (device vda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:31:51.823483 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:31:51.823503 kernel: BTRFS info (device vda6): using free space tree
Aug 5 21:31:51.825836 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 21:31:51.826801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:31:51.843559 ignition[950]: INFO : Ignition 2.19.0
Aug 5 21:31:51.843559 ignition[950]: INFO : Stage: files
Aug 5 21:31:51.844902 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:31:51.844902 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:31:51.844902 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 21:31:51.848108 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 21:31:51.848108 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 21:31:51.848108 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 21:31:51.851780 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 21:31:51.851780 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 21:31:51.851780 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 5 21:31:51.851780 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 5 21:31:51.851780 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:31:51.851780 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 5 21:31:51.848912 unknown[950]: wrote ssh authorized keys file for user: core
Aug 5 21:31:52.091495 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 5 21:31:52.170355 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:31:52.172262 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Aug 5 21:31:52.252229 systemd-networkd[765]: eth0: Gained IPv6LL
Aug 5 21:31:52.523263 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 5 21:31:53.159917 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Aug 5 21:31:53.159917 ignition[950]: INFO : files: op(c): [started] processing unit "containerd.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(c): [finished] processing unit "containerd.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Aug 5 21:31:53.163348 ignition[950]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:31:53.185382 ignition[950]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:31:53.188764 ignition[950]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 21:31:53.190945 ignition[950]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 21:31:53.190945 ignition[950]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 21:31:53.190945 ignition[950]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 21:31:53.190945 ignition[950]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:31:53.190945 ignition[950]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 21:31:53.190945 ignition[950]: INFO : files: files passed
Aug 5 21:31:53.190945 ignition[950]: INFO : Ignition finished successfully
Aug 5 21:31:53.191269 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 21:31:53.199933 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 21:31:53.201416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 21:31:53.203834 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 21:31:53.203907 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 21:31:53.208750 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 21:31:53.211934 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:31:53.211934 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:31:53.214514 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 21:31:53.216602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:31:53.217851 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 21:31:53.226921 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 21:31:53.245022 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 21:31:53.245854 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 21:31:53.246959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 21:31:53.247763 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 21:31:53.248759 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 21:31:53.251199 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 21:31:53.266402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:31:53.273965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 21:31:53.281361 systemd[1]: Stopped target network.target - Network.
Aug 5 21:31:53.282239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:31:53.283952 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:31:53.285817 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 21:31:53.287333 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 21:31:53.287446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:31:53.289574 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 21:31:53.290610 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 21:31:53.292286 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 21:31:53.293871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:31:53.295572 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 21:31:53.297339 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 21:31:53.299122 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:31:53.301085 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 21:31:53.302675 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 21:31:53.304432 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 21:31:53.306009 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 21:31:53.306124 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:31:53.308075 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:31:53.309642 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:31:53.311766 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 21:31:53.312920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:31:53.314542 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 21:31:53.314650 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:31:53.317011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 21:31:53.317123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:31:53.318960 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 21:31:53.320442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 21:31:53.323894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:31:53.324989 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 21:31:53.326840 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 21:31:53.328057 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 21:31:53.328154 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:31:53.329469 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 21:31:53.329547 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:31:53.330701 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 21:31:53.330820 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 21:31:53.332360 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 21:31:53.332456 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 21:31:53.344021 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 21:31:53.345550 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 21:31:53.346591 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 21:31:53.348199 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 21:31:53.349626 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 21:31:53.349740 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:31:53.351408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 21:31:53.351504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 21:31:53.351905 systemd-networkd[765]: eth0: DHCPv6 lease lost
Aug 5 21:31:53.355658 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 21:31:53.355739 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 21:31:53.357657 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 21:31:53.357752 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:31:53.361117 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 21:31:53.368694 ignition[1004]: INFO : Ignition 2.19.0
Aug 5 21:31:53.368694 ignition[1004]: INFO : Stage: umount
Aug 5 21:31:53.368694 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 21:31:53.368694 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 21:31:53.368694 ignition[1004]: INFO : umount: umount passed
Aug 5 21:31:53.368694 ignition[1004]: INFO : Ignition finished successfully
Aug 5 21:31:53.364322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 21:31:53.364375 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:31:53.368241 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 21:31:53.368730 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 21:31:53.368974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 21:31:53.371316 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 21:31:53.371405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 21:31:53.373292 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 21:31:53.373372 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 21:31:53.389228 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 21:31:53.389294 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 21:31:53.391157 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 21:31:53.391204 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 21:31:53.392635 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 21:31:53.392670 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 21:31:53.394201 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 21:31:53.394240 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 21:31:53.395871 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 21:31:53.395912 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:31:53.397382 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 21:31:53.397421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:31:53.399001 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 21:31:53.399039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:31:53.400898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:31:53.403510 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 21:31:53.403611 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 21:31:53.412823 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 21:31:53.412963 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:31:53.414833 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 21:31:53.414871 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:31:53.416550 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 21:31:53.416579 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:31:53.417492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 21:31:53.417533 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:31:53.420249 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 21:31:53.420291 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:31:53.424918 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:31:53.424965 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:31:53.434975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 21:31:53.435971 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 21:31:53.436022 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:31:53.437818 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 21:31:53.437863 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:31:53.439586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 21:31:53.439626 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:31:53.442666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:31:53.442709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:31:53.444862 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 21:31:53.444959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 21:31:53.446217 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 21:31:53.446298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 21:31:53.448585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 21:31:53.450234 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 21:31:53.450288 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 21:31:53.452640 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 21:31:53.461526 systemd[1]: Switching root.
Aug 5 21:31:53.494280 systemd-journald[236]: Journal stopped
Aug 5 21:31:54.233958 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Aug 5 21:31:54.234013 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 21:31:54.234026 kernel: SELinux: policy capability open_perms=1
Aug 5 21:31:54.234035 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 21:31:54.234051 kernel: SELinux: policy capability always_check_network=0
Aug 5 21:31:54.234061 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 21:31:54.234070 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 21:31:54.234080 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 21:31:54.234089 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 21:31:54.234099 kernel: audit: type=1403 audit(1722893513.673:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 21:31:54.234109 systemd[1]: Successfully loaded SELinux policy in 29.413ms.
Aug 5 21:31:54.234130 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.943ms.
Aug 5 21:31:54.234143 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:31:54.234154 systemd[1]: Detected virtualization kvm.
Aug 5 21:31:54.234167 systemd[1]: Detected architecture arm64.
Aug 5 21:31:54.234177 systemd[1]: Detected first boot.
Aug 5 21:31:54.234188 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:31:54.234198 zram_generator::config[1065]: No configuration found.
Aug 5 21:31:54.234209 systemd[1]: Populated /etc with preset unit settings.
Aug 5 21:31:54.234219 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 21:31:54.234230 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 21:31:54.234242 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 21:31:54.234253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 21:31:54.234263 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 21:31:54.234273 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 21:31:54.234284 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 21:31:54.234294 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 21:31:54.234305 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 21:31:54.234315 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 21:31:54.234327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:31:54.234338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:31:54.234348 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 21:31:54.234359 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 21:31:54.234369 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 21:31:54.234380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:31:54.234391 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 5 21:31:54.234402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:31:54.234412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 21:31:54.234424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:31:54.234434 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:31:54.234445 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:31:54.234455 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:31:54.234465 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 21:31:54.234476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 21:31:54.234487 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:31:54.234497 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:31:54.234509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:31:54.234520 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:31:54.234530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:31:54.234541 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 21:31:54.234551 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 21:31:54.234561 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 21:31:54.234571 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 21:31:54.234582 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 21:31:54.234592 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 21:31:54.234604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 21:31:54.234615 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 21:31:54.234626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:31:54.234637 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:31:54.234647 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 21:31:54.234658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:31:54.234668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:31:54.234679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:31:54.234689 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 21:31:54.234701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:31:54.234712 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 21:31:54.234723 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 5 21:31:54.234734 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 5 21:31:54.234746 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:31:54.234757 kernel: fuse: init (API version 7.39)
Aug 5 21:31:54.234767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:31:54.234776 kernel: loop: module loaded
Aug 5 21:31:54.234793 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 21:31:54.234806 kernel: ACPI: bus type drm_connector registered
Aug 5 21:31:54.234827 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 21:31:54.234853 systemd-journald[1148]: Collecting audit messages is disabled.
Aug 5 21:31:54.234878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:31:54.234891 systemd-journald[1148]: Journal started
Aug 5 21:31:54.234912 systemd-journald[1148]: Runtime Journal (/run/log/journal/9dc3798a9a8d4c39ae4a75f006455976) is 5.9M, max 47.3M, 41.4M free.
Aug 5 21:31:54.238164 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:31:54.239225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 21:31:54.240202 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 21:31:54.241396 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 21:31:54.242420 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 21:31:54.243420 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 21:31:54.244531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 21:31:54.245724 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 21:31:54.247108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:31:54.248373 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 21:31:54.248533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 21:31:54.249737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:31:54.249904 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:31:54.251036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:31:54.251183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:31:54.252272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:31:54.252425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:31:54.253847 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 21:31:54.253994 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 21:31:54.255086 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:31:54.255282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:31:54.256466 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:31:54.257979 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 21:31:54.259553 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 21:31:54.269613 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 21:31:54.280904 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 21:31:54.282806 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 21:31:54.283792 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 21:31:54.287374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 21:31:54.289999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 21:31:54.291038 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:31:54.292962 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 21:31:54.294246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:31:54.296180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:31:54.298134 systemd-journald[1148]: Time spent on flushing to /var/log/journal/9dc3798a9a8d4c39ae4a75f006455976 is 16.358ms for 843 entries.
Aug 5 21:31:54.298134 systemd-journald[1148]: System Journal (/var/log/journal/9dc3798a9a8d4c39ae4a75f006455976) is 8.0M, max 195.6M, 187.6M free.
Aug 5 21:31:54.325380 systemd-journald[1148]: Received client request to flush runtime journal.
Aug 5 21:31:54.300063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:31:54.302777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:31:54.304359 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 21:31:54.305860 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 21:31:54.321064 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 21:31:54.325390 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 21:31:54.327006 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 21:31:54.328627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:31:54.330626 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Aug 5 21:31:54.330888 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Aug 5 21:31:54.332147 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 21:31:54.336063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:31:54.344220 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 21:31:54.347866 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 5 21:31:54.366255 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 21:31:54.376046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:31:54.387364 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Aug 5 21:31:54.387387 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Aug 5 21:31:54.390896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:31:54.731797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 21:31:54.742976 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:31:54.762236 systemd-udevd[1231]: Using default interface naming scheme 'v255'.
Aug 5 21:31:54.775657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:31:54.788020 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:31:54.805513 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Aug 5 21:31:54.812939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1232)
Aug 5 21:31:54.824688 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 21:31:54.852854 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1239)
Aug 5 21:31:54.872410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 21:31:54.891026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:31:54.896913 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 21:31:54.908230 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 21:31:54.911343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 21:31:54.923137 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:31:54.940035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:31:54.954799 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 21:31:54.956104 systemd-networkd[1238]: lo: Link UP
Aug 5 21:31:54.956114 systemd-networkd[1238]: lo: Gained carrier
Aug 5 21:31:54.956641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:31:54.956763 systemd-networkd[1238]: Enumeration completed
Aug 5 21:31:54.957193 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:31:54.957201 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:31:54.959211 systemd-networkd[1238]: eth0: Link UP
Aug 5 21:31:54.959220 systemd-networkd[1238]: eth0: Gained carrier
Aug 5 21:31:54.959232 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:31:54.967978 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 21:31:54.969109 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:31:54.971621 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 21:31:54.974709 lvm[1276]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:31:54.976944 systemd-networkd[1238]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 21:31:55.007450 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 21:31:55.008772 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 21:31:55.010059 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 21:31:55.010096 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:31:55.011110 systemd[1]: Reached target machines.target - Containers.
Aug 5 21:31:55.013426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 21:31:55.023953 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 21:31:55.026287 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 21:31:55.027461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:31:55.028416 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 21:31:55.032457 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 21:31:55.037038 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 21:31:55.038960 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 21:31:55.047331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 21:31:55.051827 kernel: loop0: detected capacity change from 0 to 59688
Aug 5 21:31:55.051885 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 21:31:55.059917 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 21:31:55.061437 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 21:31:55.069865 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 21:31:55.115843 kernel: loop1: detected capacity change from 0 to 113712
Aug 5 21:31:55.154859 kernel: loop2: detected capacity change from 0 to 193208
Aug 5 21:31:55.194843 kernel: loop3: detected capacity change from 0 to 59688
Aug 5 21:31:55.202914 kernel: loop4: detected capacity change from 0 to 113712
Aug 5 21:31:55.210837 kernel: loop5: detected capacity change from 0 to 193208
Aug 5 21:31:55.216856 (sd-merge)[1299]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 21:31:55.217353 (sd-merge)[1299]: Merged extensions into '/usr'.
Aug 5 21:31:55.221004 systemd[1]: Reloading requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 21:31:55.221024 systemd[1]: Reloading...
Aug 5 21:31:55.269912 zram_generator::config[1331]: No configuration found.
Aug 5 21:31:55.313999 ldconfig[1281]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 21:31:55.368202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:31:55.421439 systemd[1]: Reloading finished in 200 ms.
Aug 5 21:31:55.434951 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 21:31:55.436272 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 21:31:55.450965 systemd[1]: Starting ensure-sysext.service...
Aug 5 21:31:55.452796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:31:55.458130 systemd[1]: Reloading requested from client PID 1366 ('systemctl') (unit ensure-sysext.service)...
Aug 5 21:31:55.458143 systemd[1]: Reloading...
Aug 5 21:31:55.469499 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 21:31:55.469758 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 21:31:55.470414 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 21:31:55.470628 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Aug 5 21:31:55.470673 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Aug 5 21:31:55.473107 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:31:55.473121 systemd-tmpfiles[1372]: Skipping /boot
Aug 5 21:31:55.479632 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:31:55.479652 systemd-tmpfiles[1372]: Skipping /boot
Aug 5 21:31:55.501005 zram_generator::config[1400]: No configuration found.
Aug 5 21:31:55.598143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:31:55.651540 systemd[1]: Reloading finished in 193 ms.
Aug 5 21:31:55.665864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:31:55.674705 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:31:55.676918 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 21:31:55.679107 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 21:31:55.684957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:31:55.686982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 21:31:55.692271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:31:55.695587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:31:55.698950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:31:55.704069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:31:55.705108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:31:55.710133 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 21:31:55.711676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:31:55.711845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:31:55.713571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:31:55.713719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:31:55.715303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:31:55.715483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:31:55.721517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:31:55.724087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:31:55.727097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:31:55.731661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:31:55.732492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:31:55.736156 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 21:31:55.737966 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 21:31:55.739674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:31:55.739844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:31:55.741333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:31:55.741470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:31:55.747897 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:31:55.748096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:31:55.750370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:31:55.758941 augenrules[1488]: No rules
Aug 5 21:31:55.761036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:31:55.763095 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:31:55.765443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:31:55.766937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:31:55.768039 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:31:55.769680 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 21:31:55.771514 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 21:31:55.780282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:31:55.780430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:31:55.782483 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:31:55.782726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:31:55.782765 systemd-resolved[1446]: Positive Trust Anchors:
Aug 5 21:31:55.782775 systemd-resolved[1446]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:31:55.782830 systemd-resolved[1446]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:31:55.784656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:31:55.784821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:31:55.789395 systemd[1]: Finished ensure-sysext.service.
Aug 5 21:31:55.789770 systemd-resolved[1446]: Defaulting to hostname 'linux'.
Aug 5 21:31:55.792077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:31:55.794150 systemd[1]: Reached target network.target - Network.
Aug 5 21:31:55.795041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:31:55.796305 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:31:55.796369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:31:55.803993 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 21:31:55.804888 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 21:31:55.845835 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 21:31:55.846745 systemd-timesyncd[1514]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 21:31:55.846823 systemd-timesyncd[1514]: Initial clock synchronization to Mon 2024-08-05 21:31:55.784432 UTC.
Aug 5 21:31:55.847297 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:31:55.848263 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 21:31:55.849314 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 21:31:55.850375 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 21:31:55.851298 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 21:31:55.851328 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:31:55.852030 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 21:31:55.852909 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 21:31:55.853805 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 21:31:55.855017 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:31:55.856403 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 21:31:55.858965 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 21:31:55.861089 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 21:31:55.867911 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 21:31:55.868846 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:31:55.869656 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:31:55.870588 systemd[1]: System is tainted: cgroupsv1
Aug 5 21:31:55.870636 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:31:55.870655 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:31:55.871806 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 21:31:55.873749 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 21:31:55.875523 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 21:31:55.879965 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 21:31:55.880913 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 21:31:55.882842 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 21:31:55.887943 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 21:31:55.892508 jq[1520]: false
Aug 5 21:31:55.894991 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 21:31:55.897340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 21:31:55.903034 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 21:31:55.906089 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 21:31:55.911998 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 21:31:55.916939 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 21:31:55.917930 extend-filesystems[1522]: Found loop3
Aug 5 21:31:55.917930 extend-filesystems[1522]: Found loop4
Aug 5 21:31:55.917930 extend-filesystems[1522]: Found loop5
Aug 5 21:31:55.917930 extend-filesystems[1522]: Found vda
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda1
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda2
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda3
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found usr
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda4
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda6
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda7
Aug 5 21:31:55.923906 extend-filesystems[1522]: Found vda9
Aug 5 21:31:55.923906 extend-filesystems[1522]: Checking size of /dev/vda9
Aug 5 21:31:55.923259 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 21:31:55.919697 dbus-daemon[1519]: [system] SELinux support is enabled
Aug 5 21:31:55.928884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 21:31:55.935597 jq[1541]: true
Aug 5 21:31:55.929123 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 21:31:55.929396 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 21:31:55.929594 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 21:31:55.938889 extend-filesystems[1522]: Resized partition /dev/vda9
Aug 5 21:31:55.941531 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 21:31:55.941790 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 21:31:55.955840 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1244)
Aug 5 21:31:55.961059 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 21:31:55.961104 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 21:31:55.962014 jq[1564]: true
Aug 5 21:31:55.962338 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 21:31:55.965059 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 21:31:55.965083 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 21:31:55.969762 extend-filesystems[1563]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 21:31:55.975946 tar[1548]: linux-arm64/helm
Aug 5 21:31:55.979832 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 21:31:56.006996 update_engine[1536]: I0805 21:31:56.006345 1536 main.cc:92] Flatcar Update Engine starting
Aug 5 21:31:56.010574 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 21:31:56.011498 update_engine[1536]: I0805 21:31:56.011404 1536 update_check_scheduler.cc:74] Next update check in 2m4s
Aug 5 21:31:56.012707 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 21:31:56.019833 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 21:31:56.023004 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 21:31:56.049536 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 21:31:56.049536 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 21:31:56.049536 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 21:31:56.059051 extend-filesystems[1522]: Resized filesystem in /dev/vda9
Aug 5 21:31:56.055164 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 5 21:31:56.055183 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 21:31:56.055489 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 21:31:56.057800 systemd-logind[1531]: New seat seat0.
Aug 5 21:31:56.073222 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 21:31:56.074051 bash[1592]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 21:31:56.076830 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 21:31:56.079204 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 21:31:56.157052 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 21:31:56.191005 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 21:31:56.214035 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 21:31:56.219236 containerd[1565]: time="2024-08-05T21:31:56.219123073Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 21:31:56.221081 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 21:31:56.228278 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 21:31:56.228502 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 21:31:56.247418 containerd[1565]: time="2024-08-05T21:31:56.247365637Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 21:31:56.247418 containerd[1565]: time="2024-08-05T21:31:56.247414533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.248635 containerd[1565]: time="2024-08-05T21:31:56.248601510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:31:56.248635 containerd[1565]: time="2024-08-05T21:31:56.248629650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.248938 containerd[1565]: time="2024-08-05T21:31:56.248893859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:31:56.248938 containerd[1565]: time="2024-08-05T21:31:56.248916124Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.248993915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249047098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249059917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249115125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249288486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249305393Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249314958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249431723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249445098Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249492606Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 21:31:56.249596 containerd[1565]: time="2024-08-05T21:31:56.249504235Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 21:31:56.249093 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253490115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253524843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253542068Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253571199Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253585249Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253594814Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253613587Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253732614Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253748291Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253760436Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253774724Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253789330Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253842076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254141 containerd[1565]: time="2024-08-05T21:31:56.253857436Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.253869700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.253888711Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.253903594Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.253915660Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.253928162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.254021629Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.254305206Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.254331203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.254344419Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 21:31:56.254430 containerd[1565]: time="2024-08-05T21:31:56.254365931Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254475075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254488768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254499920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254510438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254525321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254548301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254560724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254572115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254592 containerd[1565]: time="2024-08-05T21:31:56.254584934Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 21:31:56.254734 containerd[1565]: time="2024-08-05T21:31:56.254709319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254734 containerd[1565]: time="2024-08-05T21:31:56.254726544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254770 containerd[1565]: time="2024-08-05T21:31:56.254738610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254770 containerd[1565]: time="2024-08-05T21:31:56.254750596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254770 containerd[1565]: time="2024-08-05T21:31:56.254762026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254846 containerd[1565]: time="2024-08-05T21:31:56.254775282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254846 containerd[1565]: time="2024-08-05T21:31:56.254794293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.254846 containerd[1565]: time="2024-08-05T21:31:56.254823703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 21:31:56.255559 containerd[1565]: time="2024-08-05T21:31:56.255184475Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 21:31:56.255559 containerd[1565]: time="2024-08-05T21:31:56.255264250Z" level=info msg="Connect containerd service"
Aug 5 21:31:56.255559 containerd[1565]: time="2024-08-05T21:31:56.255291040Z" level=info msg="using legacy CRI server"
Aug 5 21:31:56.255559 containerd[1565]: time="2024-08-05T21:31:56.255298224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 21:31:56.255559 containerd[1565]: time="2024-08-05T21:31:56.255435190Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 21:31:56.256162 containerd[1565]: time="2024-08-05T21:31:56.256138121Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 21:31:56.256208 containerd[1565]: time="2024-08-05T21:31:56.256187335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 21:31:56.256228 containerd[1565]: time="2024-08-05T21:31:56.256206267Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 21:31:56.256228 containerd[1565]: time="2024-08-05T21:31:56.256217697Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 21:31:56.256418 containerd[1565]: time="2024-08-05T21:31:56.256327873Z" level=info msg="Start subscribing containerd event"
Aug 5 21:31:56.256455 containerd[1565]: time="2024-08-05T21:31:56.256431819Z" level=info msg="Start recovering state"
Aug 5 21:31:56.256616 containerd[1565]: time="2024-08-05T21:31:56.256491749Z" level=info msg="Start event monitor"
Aug 5 21:31:56.256616 containerd[1565]: time="2024-08-05T21:31:56.256504092Z" level=info msg="Start snapshots syncer"
Aug 5 21:31:56.256616 containerd[1565]: time="2024-08-05T21:31:56.256515443Z" level=info msg="Start cni network conf syncer for default"
Aug 5 21:31:56.256616 containerd[1565]: time="2024-08-05T21:31:56.256523778Z" level=info msg="Start streaming server"
Aug 5 21:31:56.258463 containerd[1565]: time="2024-08-05T21:31:56.256781875Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 21:31:56.258463 containerd[1565]: time="2024-08-05T21:31:56.257041957Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 21:31:56.258463 containerd[1565]: time="2024-08-05T21:31:56.257079185Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 21:31:56.257222 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 21:31:56.258821 containerd[1565]: time="2024-08-05T21:31:56.258774617Z" level=info msg="containerd successfully booted in 0.041089s"
Aug 5 21:31:56.260834 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 21:31:56.275131 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 21:31:56.277541 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 5 21:31:56.278915 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 21:31:56.372794 tar[1548]: linux-arm64/LICENSE
Aug 5 21:31:56.372980 tar[1548]: linux-arm64/README.md
Aug 5 21:31:56.384104 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 21:31:56.539946 systemd-networkd[1238]: eth0: Gained IPv6LL
Aug 5 21:31:56.542332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 21:31:56.543916 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 21:31:56.554134 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 21:31:56.556341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:31:56.558347 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 21:31:56.575361 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 21:31:56.575636 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 21:31:56.577371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 21:31:56.582657 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 21:31:57.036977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:31:57.038440 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 21:31:57.041231 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:31:57.042958 systemd[1]: Startup finished in 5.519s (kernel) + 3.398s (userspace) = 8.918s.
Aug 5 21:31:57.616294 kubelet[1668]: E0805 21:31:57.615473 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:31:57.621578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:31:57.621774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:32:01.818162 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 21:32:01.830036 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:57462.service - OpenSSH per-connection server daemon (10.0.0.1:57462).
Aug 5 21:32:01.904284 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 57462 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:01.906110 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:01.925327 systemd-logind[1531]: New session 1 of user core.
Aug 5 21:32:01.926250 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 21:32:01.934017 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 21:32:01.943644 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 21:32:01.945846 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 21:32:01.952424 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.027597 systemd[1688]: Queued start job for default target default.target.
Aug 5 21:32:02.027987 systemd[1688]: Created slice app.slice - User Application Slice.
Aug 5 21:32:02.028022 systemd[1688]: Reached target paths.target - Paths.
Aug 5 21:32:02.028033 systemd[1688]: Reached target timers.target - Timers.
Aug 5 21:32:02.041931 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 21:32:02.047866 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 21:32:02.048458 systemd[1688]: Reached target sockets.target - Sockets.
Aug 5 21:32:02.048471 systemd[1688]: Reached target basic.target - Basic System.
Aug 5 21:32:02.048520 systemd[1688]: Reached target default.target - Main User Target.
Aug 5 21:32:02.048546 systemd[1688]: Startup finished in 90ms.
Aug 5 21:32:02.048666 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 21:32:02.049943 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 21:32:02.117107 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:60266.service - OpenSSH per-connection server daemon (10.0.0.1:60266).
Aug 5 21:32:02.152002 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 60266 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:02.153197 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.157726 systemd-logind[1531]: New session 2 of user core.
Aug 5 21:32:02.172075 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 21:32:02.225934 sshd[1700]: pam_unix(sshd:session): session closed for user core
Aug 5 21:32:02.238060 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:60270.service - OpenSSH per-connection server daemon (10.0.0.1:60270).
Aug 5 21:32:02.238473 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:60266.service: Deactivated successfully.
Aug 5 21:32:02.240498 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit.
Aug 5 21:32:02.241040 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 21:32:02.243144 systemd-logind[1531]: Removed session 2.
Aug 5 21:32:02.270112 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 60270 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:02.271470 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.275555 systemd-logind[1531]: New session 3 of user core.
Aug 5 21:32:02.286112 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 21:32:02.336938 sshd[1705]: pam_unix(sshd:session): session closed for user core
Aug 5 21:32:02.345055 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:60282.service - OpenSSH per-connection server daemon (10.0.0.1:60282).
Aug 5 21:32:02.345530 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:60270.service: Deactivated successfully.
Aug 5 21:32:02.347348 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit.
Aug 5 21:32:02.347880 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 21:32:02.349255 systemd-logind[1531]: Removed session 3.
Aug 5 21:32:02.376781 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 60282 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:02.378035 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.382355 systemd-logind[1531]: New session 4 of user core.
Aug 5 21:32:02.390034 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 21:32:02.441041 sshd[1713]: pam_unix(sshd:session): session closed for user core
Aug 5 21:32:02.455038 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:60286.service - OpenSSH per-connection server daemon (10.0.0.1:60286).
Aug 5 21:32:02.455407 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:60282.service: Deactivated successfully.
Aug 5 21:32:02.457193 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit.
Aug 5 21:32:02.457717 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 21:32:02.459195 systemd-logind[1531]: Removed session 4.
Aug 5 21:32:02.486453 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 60286 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:02.487510 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.491281 systemd-logind[1531]: New session 5 of user core.
Aug 5 21:32:02.502024 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 21:32:02.561654 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 21:32:02.561898 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:32:02.574512 sudo[1728]: pam_unix(sudo:session): session closed for user root
Aug 5 21:32:02.576127 sshd[1721]: pam_unix(sshd:session): session closed for user core
Aug 5 21:32:02.586084 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290).
Aug 5 21:32:02.586510 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:60286.service: Deactivated successfully.
Aug 5 21:32:02.588186 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit.
Aug 5 21:32:02.588772 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 21:32:02.590190 systemd-logind[1531]: Removed session 5.
Aug 5 21:32:02.617452 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:02.618505 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.623097 systemd-logind[1531]: New session 6 of user core.
Aug 5 21:32:02.634064 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 21:32:02.684887 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 21:32:02.685408 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:32:02.688215 sudo[1738]: pam_unix(sudo:session): session closed for user root
Aug 5 21:32:02.692581 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 21:32:02.692851 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:32:02.710043 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 21:32:02.711244 auditctl[1741]: No rules
Aug 5 21:32:02.711598 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 21:32:02.711847 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 21:32:02.714103 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:32:02.736998 augenrules[1760]: No rules
Aug 5 21:32:02.738137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:32:02.739445 sudo[1737]: pam_unix(sudo:session): session closed for user root
Aug 5 21:32:02.741051 sshd[1730]: pam_unix(sshd:session): session closed for user core
Aug 5 21:32:02.752064 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:60304.service - OpenSSH per-connection server daemon (10.0.0.1:60304).
Aug 5 21:32:02.752491 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:60290.service: Deactivated successfully.
Aug 5 21:32:02.754013 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 21:32:02.754623 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit.
Aug 5 21:32:02.756172 systemd-logind[1531]: Removed session 6.
Aug 5 21:32:02.783643 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 60304 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:32:02.784797 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:32:02.788793 systemd-logind[1531]: New session 7 of user core.
Aug 5 21:32:02.802039 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 21:32:02.851101 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 21:32:02.851347 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 21:32:02.949051 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 21:32:02.949195 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 21:32:03.187993 dockerd[1788]: time="2024-08-05T21:32:03.187928848Z" level=info msg="Starting up"
Aug 5 21:32:03.372384 dockerd[1788]: time="2024-08-05T21:32:03.372305856Z" level=info msg="Loading containers: start."
Aug 5 21:32:03.460871 kernel: Initializing XFRM netlink socket
Aug 5 21:32:03.528491 systemd-networkd[1238]: docker0: Link UP
Aug 5 21:32:03.547126 dockerd[1788]: time="2024-08-05T21:32:03.547081554Z" level=info msg="Loading containers: done."
Aug 5 21:32:03.601481 dockerd[1788]: time="2024-08-05T21:32:03.600986909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 21:32:03.601481 dockerd[1788]: time="2024-08-05T21:32:03.601171583Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 21:32:03.601481 dockerd[1788]: time="2024-08-05T21:32:03.601280330Z" level=info msg="Daemon has completed initialization"
Aug 5 21:32:03.628088 dockerd[1788]: time="2024-08-05T21:32:03.627904733Z" level=info msg="API listen on /run/docker.sock"
Aug 5 21:32:03.629644 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 21:32:04.286208 containerd[1565]: time="2024-08-05T21:32:04.286151009Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\""
Aug 5 21:32:05.120472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537437827.mount: Deactivated successfully.
Aug 5 21:32:06.806670 containerd[1565]: time="2024-08-05T21:32:06.806626542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:06.807659 containerd[1565]: time="2024-08-05T21:32:06.807410774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601518"
Aug 5 21:32:06.808555 containerd[1565]: time="2024-08-05T21:32:06.808498943Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:06.811354 containerd[1565]: time="2024-08-05T21:32:06.811327025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:06.812591 containerd[1565]: time="2024-08-05T21:32:06.812541576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 2.526348035s"
Aug 5 21:32:06.812591 containerd[1565]: time="2024-08-05T21:32:06.812578260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\""
Aug 5 21:32:06.831941 containerd[1565]: time="2024-08-05T21:32:06.831911426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\""
Aug 5 21:32:07.871928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 21:32:07.881065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:32:07.979656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:32:07.983313 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:32:08.022196 kubelet[2002]: E0805 21:32:08.022101 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:32:08.026553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:32:08.026735 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:32:09.268846 containerd[1565]: time="2024-08-05T21:32:09.268677914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:09.270086 containerd[1565]: time="2024-08-05T21:32:09.270060616Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018272"
Aug 5 21:32:09.270891 containerd[1565]: time="2024-08-05T21:32:09.270863554Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:09.274467 containerd[1565]: time="2024-08-05T21:32:09.274425426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:09.275461 containerd[1565]: time="2024-08-05T21:32:09.275408037Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 2.44345345s"
Aug 5 21:32:09.275461 containerd[1565]: time="2024-08-05T21:32:09.275448222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\""
Aug 5 21:32:09.294868 containerd[1565]: time="2024-08-05T21:32:09.294834935Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\""
Aug 5 21:32:10.693827 containerd[1565]: time="2024-08-05T21:32:10.693768159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:10.694425 containerd[1565]: time="2024-08-05T21:32:10.694195605Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534522"
Aug 5 21:32:10.695174 containerd[1565]: time="2024-08-05T21:32:10.695146184Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:10.697930 containerd[1565]: time="2024-08-05T21:32:10.697881740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:10.699236 containerd[1565]: time="2024-08-05T21:32:10.699062801Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 1.404193313s"
Aug 5 21:32:10.699236 containerd[1565]: time="2024-08-05T21:32:10.699095602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\""
Aug 5 21:32:10.716758 containerd[1565]: time="2024-08-05T21:32:10.716731267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\""
Aug 5 21:32:11.805652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673959220.mount: Deactivated successfully.
Aug 5 21:32:12.111388 containerd[1565]: time="2024-08-05T21:32:12.111316885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:12.112027 containerd[1565]: time="2024-08-05T21:32:12.111973722Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977921"
Aug 5 21:32:12.112574 containerd[1565]: time="2024-08-05T21:32:12.112536005Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:12.114546 containerd[1565]: time="2024-08-05T21:32:12.114508712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:12.115340 containerd[1565]: time="2024-08-05T21:32:12.115301104Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 1.398528282s"
Aug 5 21:32:12.115375 containerd[1565]: time="2024-08-05T21:32:12.115338989Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\""
Aug 5 21:32:12.134741 containerd[1565]: time="2024-08-05T21:32:12.134698237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 21:32:12.593943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576273685.mount: Deactivated successfully.
Aug 5 21:32:12.597756 containerd[1565]: time="2024-08-05T21:32:12.597718073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:12.598388 containerd[1565]: time="2024-08-05T21:32:12.598358884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Aug 5 21:32:12.599030 containerd[1565]: time="2024-08-05T21:32:12.598968404Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:12.600994 containerd[1565]: time="2024-08-05T21:32:12.600952260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:12.602014 containerd[1565]: time="2024-08-05T21:32:12.601900829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 467.16023ms"
Aug 5 21:32:12.602014 containerd[1565]: time="2024-08-05T21:32:12.601930961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 21:32:12.620753 containerd[1565]: time="2024-08-05T21:32:12.620724409Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 21:32:13.188836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911909215.mount: Deactivated successfully.
Aug 5 21:32:15.765900 containerd[1565]: time="2024-08-05T21:32:15.765849981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:15.766982 containerd[1565]: time="2024-08-05T21:32:15.766709013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Aug 5 21:32:15.767752 containerd[1565]: time="2024-08-05T21:32:15.767687091Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:15.770713 containerd[1565]: time="2024-08-05T21:32:15.770683806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:15.772087 containerd[1565]: time="2024-08-05T21:32:15.772046687Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.151286788s"
Aug 5 21:32:15.772087 containerd[1565]: time="2024-08-05T21:32:15.772086463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Aug 5 21:32:15.793396 containerd[1565]: time="2024-08-05T21:32:15.793237924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Aug 5 21:32:16.327560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009342737.mount: Deactivated successfully.
Aug 5 21:32:16.859142 containerd[1565]: time="2024-08-05T21:32:16.859050636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:16.860737 containerd[1565]: time="2024-08-05T21:32:16.860696550Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Aug 5 21:32:16.863709 containerd[1565]: time="2024-08-05T21:32:16.863660194Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:16.865929 containerd[1565]: time="2024-08-05T21:32:16.865885516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:16.866901 containerd[1565]: time="2024-08-05T21:32:16.866818054Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.073529519s"
Aug 5 21:32:16.866901 containerd[1565]: time="2024-08-05T21:32:16.866856993Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Aug 5 21:32:18.071931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 21:32:18.080041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:32:18.177282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:32:18.181119 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:32:18.222404 kubelet[2200]: E0805 21:32:18.222292 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:32:18.225197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:32:18.225336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:32:21.881465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:32:21.890036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:32:21.906328 systemd[1]: Reloading requested from client PID 2218 ('systemctl') (unit session-7.scope)...
Aug 5 21:32:21.906343 systemd[1]: Reloading...
Aug 5 21:32:21.975849 zram_generator::config[2258]: No configuration found.
Aug 5 21:32:22.111526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:32:22.171488 systemd[1]: Reloading finished in 264 ms.
Aug 5 21:32:22.213404 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 21:32:22.213469 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 21:32:22.213730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:32:22.216150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:32:22.311654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:32:22.315562 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 21:32:22.359949 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:32:22.359949 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 21:32:22.359949 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:32:22.360315 kubelet[2313]: I0805 21:32:22.359999 2313 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 21:32:22.926757 kubelet[2313]: I0805 21:32:22.926714 2313 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 21:32:22.926757 kubelet[2313]: I0805 21:32:22.926746 2313 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 21:32:22.926975 kubelet[2313]: I0805 21:32:22.926959 2313 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 21:32:22.946055 kubelet[2313]: I0805 21:32:22.946016 2313 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:32:22.950286 kubelet[2313]: E0805 21:32:22.950131 2313 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.960851 kubelet[2313]: W0805 21:32:22.960771 2313 machine.go:65] Cannot read vendor id correctly, set empty.
Aug 5 21:32:22.962044 kubelet[2313]: I0805 21:32:22.962022 2313 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 21:32:22.962346 kubelet[2313]: I0805 21:32:22.962324 2313 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 21:32:22.962518 kubelet[2313]: I0805 21:32:22.962495 2313 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 21:32:22.962602 kubelet[2313]: I0805 21:32:22.962527 2313 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 21:32:22.962602 kubelet[2313]: I0805 21:32:22.962536 2313 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 21:32:22.962718 kubelet[2313]: I0805 21:32:22.962694 2313 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:32:22.964109 kubelet[2313]: I0805 21:32:22.964087 2313 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 21:32:22.964151 kubelet[2313]: I0805 21:32:22.964111 2313 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 21:32:22.964644 kubelet[2313]: I0805 21:32:22.964211 2313 kubelet.go:309] "Adding apiserver pod source"
Aug 5 21:32:22.964644 kubelet[2313]: I0805 21:32:22.964235 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 21:32:22.964644 kubelet[2313]: W0805 21:32:22.964572 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.964644 kubelet[2313]: E0805 21:32:22.964606 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.964783 kubelet[2313]: W0805 21:32:22.964756 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.964804 kubelet[2313]: E0805 21:32:22.964793 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.965651 kubelet[2313]: I0805 21:32:22.965633 2313 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 21:32:22.967314 kubelet[2313]: W0805 21:32:22.967294 2313 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 21:32:22.968031 kubelet[2313]: I0805 21:32:22.968007 2313 server.go:1232] "Started kubelet"
Aug 5 21:32:22.969745 kubelet[2313]: I0805 21:32:22.969708 2313 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 21:32:22.969960 kubelet[2313]: I0805 21:32:22.969823 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 21:32:22.969960 kubelet[2313]: E0805 21:32:22.969895 2313 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 21:32:22.969960 kubelet[2313]: E0805 21:32:22.969919 2313 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 21:32:22.969960 kubelet[2313]: I0805 21:32:22.969962 2313 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 21:32:22.970064 kubelet[2313]: I0805 21:32:22.969716 2313 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 21:32:22.970920 kubelet[2313]: I0805 21:32:22.970699 2313 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 21:32:22.972007 kubelet[2313]: E0805 21:32:22.971978 2313 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 21:32:22.972007 kubelet[2313]: I0805 21:32:22.972010 2313 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 21:32:22.972100 kubelet[2313]: I0805 21:32:22.972091 2313 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 21:32:22.972153 kubelet[2313]: I0805 21:32:22.972137 2313 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 21:32:22.972443 kubelet[2313]: W0805 21:32:22.972399 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.972443 kubelet[2313]: E0805 21:32:22.972440 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.973149 kubelet[2313]: E0805 21:32:22.972899 2313 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms"
Aug 5 21:32:22.973545 kubelet[2313]: E0805 21:32:22.973265 2313 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17e8f28c8e6bbd8e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 32, 22, 967983502, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 32, 22, 967983502, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.18:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.18:6443: connect: connection refused'(may retry after sleeping)
Aug 5 21:32:22.985030 kubelet[2313]: I0805 21:32:22.984984 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 21:32:22.986679 kubelet[2313]: I0805 21:32:22.986639 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 21:32:22.986679 kubelet[2313]: I0805 21:32:22.986665 2313 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 21:32:22.986679 kubelet[2313]: I0805 21:32:22.986683 2313 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 21:32:22.986825 kubelet[2313]: E0805 21:32:22.986738 2313 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 21:32:22.990484 kubelet[2313]: W0805 21:32:22.987905 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:22.990484 kubelet[2313]: E0805 21:32:22.987963 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused
Aug 5 21:32:23.011088 kubelet[2313]: I0805 21:32:23.011064 2313 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 21:32:23.011088 kubelet[2313]: I0805 21:32:23.011084 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 21:32:23.011088 kubelet[2313]: I0805 21:32:23.011101 2313 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:32:23.073433 kubelet[2313]: I0805 21:32:23.073378 2313 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 21:32:23.073921 kubelet[2313]: E0805 21:32:23.073901 2313 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost"
Aug 5 21:32:23.087020 kubelet[2313]: E0805 21:32:23.086988 2313 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 21:32:23.173587 kubelet[2313]: E0805 21:32:23.173552 2313 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms"
Aug 5 21:32:23.247591 kubelet[2313]: I0805 21:32:23.247499 2313 policy_none.go:49] "None policy: Start"
Aug 5 21:32:23.248472 kubelet[2313]: I0805 21:32:23.248453 2313 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 21:32:23.248542 kubelet[2313]: I0805 21:32:23.248483 2313 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 21:32:23.276002 kubelet[2313]: I0805 21:32:23.275970 2313 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 21:32:23.276380 kubelet[2313]: E0805 21:32:23.276350 2313 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost"
Aug 5 21:32:23.283365 kubelet[2313]: I0805 21:32:23.282806 2313 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 21:32:23.283365 kubelet[2313]: I0805 21:32:23.283054 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 21:32:23.286500 kubelet[2313]: E0805 21:32:23.286483 2313 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 21:32:23.287688 kubelet[2313]: I0805 21:32:23.287673 2313 topology_manager.go:215] "Topology Admit Handler" podUID="3ca630c7ac485ed80616f9411596daea" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 21:32:23.292177 kubelet[2313]: I0805 21:32:23.292161 2313 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 21:32:23.294592 kubelet[2313]: I0805 21:32:23.294488 2313 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 21:32:23.374060 kubelet[2313]: I0805 21:32:23.374029 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:23.374523 kubelet[2313]: I0805 21:32:23.374496 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:23.374566 kubelet[2313]: I0805 21:32:23.374550 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:23.374738 kubelet[2313]: I0805 21:32:23.374622 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:23.374738
kubelet[2313]: I0805 21:32:23.374676 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 21:32:23.374738 kubelet[2313]: I0805 21:32:23.374716 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ca630c7ac485ed80616f9411596daea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ca630c7ac485ed80616f9411596daea\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:32:23.374909 kubelet[2313]: I0805 21:32:23.374764 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ca630c7ac485ed80616f9411596daea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ca630c7ac485ed80616f9411596daea\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:32:23.374909 kubelet[2313]: I0805 21:32:23.374795 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ca630c7ac485ed80616f9411596daea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ca630c7ac485ed80616f9411596daea\") " pod="kube-system/kube-apiserver-localhost" Aug 5 21:32:23.374909 kubelet[2313]: I0805 21:32:23.374835 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 21:32:23.574549 kubelet[2313]: E0805 21:32:23.574510 2313 
controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Aug 5 21:32:23.600768 kubelet[2313]: E0805 21:32:23.600730 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:23.600901 kubelet[2313]: E0805 21:32:23.600865 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:23.601066 kubelet[2313]: E0805 21:32:23.601038 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:23.601583 containerd[1565]: time="2024-08-05T21:32:23.601546134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,}" Aug 5 21:32:23.601898 containerd[1565]: time="2024-08-05T21:32:23.601591325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,}" Aug 5 21:32:23.601898 containerd[1565]: time="2024-08-05T21:32:23.601576008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ca630c7ac485ed80616f9411596daea,Namespace:kube-system,Attempt:0,}" Aug 5 21:32:23.680345 kubelet[2313]: I0805 21:32:23.680309 2313 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:32:23.680612 kubelet[2313]: E0805 21:32:23.680584 2313 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": 
dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Aug 5 21:32:24.136859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544611542.mount: Deactivated successfully. Aug 5 21:32:24.141515 containerd[1565]: time="2024-08-05T21:32:24.141464576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:32:24.142892 containerd[1565]: time="2024-08-05T21:32:24.142860838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 5 21:32:24.143495 containerd[1565]: time="2024-08-05T21:32:24.143460687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:32:24.144509 containerd[1565]: time="2024-08-05T21:32:24.144480618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:32:24.145136 containerd[1565]: time="2024-08-05T21:32:24.145108742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:32:24.145941 containerd[1565]: time="2024-08-05T21:32:24.145913433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:32:24.146515 containerd[1565]: time="2024-08-05T21:32:24.146477489Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:32:24.150826 containerd[1565]: time="2024-08-05T21:32:24.150782133Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:32:24.151799 containerd[1565]: time="2024-08-05T21:32:24.151633295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.966666ms" Aug 5 21:32:24.152663 containerd[1565]: time="2024-08-05T21:32:24.152444705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.698253ms" Aug 5 21:32:24.155055 containerd[1565]: time="2024-08-05T21:32:24.155017509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.369717ms" Aug 5 21:32:24.172222 kubelet[2313]: W0805 21:32:24.172166 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.172449 kubelet[2313]: E0805 21:32:24.172425 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.301328 kubelet[2313]: W0805 21:32:24.301244 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.301328 kubelet[2313]: E0805 21:32:24.301289 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.313083 containerd[1565]: time="2024-08-05T21:32:24.312832598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:32:24.313083 containerd[1565]: time="2024-08-05T21:32:24.312901306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:24.313083 containerd[1565]: time="2024-08-05T21:32:24.312920662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:32:24.313083 containerd[1565]: time="2024-08-05T21:32:24.312934499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:24.313745 containerd[1565]: time="2024-08-05T21:32:24.313656606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:32:24.313890 containerd[1565]: time="2024-08-05T21:32:24.313725793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:24.315270 containerd[1565]: time="2024-08-05T21:32:24.315204600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:32:24.315270 containerd[1565]: time="2024-08-05T21:32:24.315232154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:24.316790 containerd[1565]: time="2024-08-05T21:32:24.316452489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:32:24.316790 containerd[1565]: time="2024-08-05T21:32:24.316511998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:24.316790 containerd[1565]: time="2024-08-05T21:32:24.316530754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:32:24.316790 containerd[1565]: time="2024-08-05T21:32:24.316544592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:24.357728 containerd[1565]: time="2024-08-05T21:32:24.357672144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ca630c7ac485ed80616f9411596daea,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0497aef7095d13bd84c73eede0000ee6066cade1accce96f7fc42f606a5404\"" Aug 5 21:32:24.364136 kubelet[2313]: E0805 21:32:24.364110 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:24.366997 containerd[1565]: time="2024-08-05T21:32:24.366866004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6a2237641046078aa3c63a08e6b042137b5ccddfa0b2de0cd0a1a3626d1f6df\"" Aug 5 21:32:24.367075 containerd[1565]: time="2024-08-05T21:32:24.367045490Z" level=info msg="CreateContainer within sandbox \"ee0497aef7095d13bd84c73eede0000ee6066cade1accce96f7fc42f606a5404\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 21:32:24.367936 kubelet[2313]: E0805 21:32:24.367880 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:24.368492 containerd[1565]: time="2024-08-05T21:32:24.368152726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4b4167bfa404a76520cb581305f9917e41281fb53907ad2f86e72cd2455ea9\"" Aug 5 21:32:24.368797 kubelet[2313]: E0805 21:32:24.368783 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Aug 5 21:32:24.370287 containerd[1565]: time="2024-08-05T21:32:24.370148916Z" level=info msg="CreateContainer within sandbox \"c6a2237641046078aa3c63a08e6b042137b5ccddfa0b2de0cd0a1a3626d1f6df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 21:32:24.370371 containerd[1565]: time="2024-08-05T21:32:24.370296209Z" level=info msg="CreateContainer within sandbox \"db4b4167bfa404a76520cb581305f9917e41281fb53907ad2f86e72cd2455ea9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 21:32:24.374953 kubelet[2313]: E0805 21:32:24.374926 2313 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Aug 5 21:32:24.442749 kubelet[2313]: W0805 21:32:24.442610 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.442749 kubelet[2313]: E0805 21:32:24.442681 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.461204 kubelet[2313]: W0805 21:32:24.461148 2313 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.461287 kubelet[2313]: E0805 21:32:24.461210 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Aug 5 21:32:24.482388 kubelet[2313]: I0805 21:32:24.482343 2313 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:32:24.482669 kubelet[2313]: E0805 21:32:24.482641 2313 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Aug 5 21:32:24.526937 containerd[1565]: time="2024-08-05T21:32:24.526654048Z" level=info msg="CreateContainer within sandbox \"db4b4167bfa404a76520cb581305f9917e41281fb53907ad2f86e72cd2455ea9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f38e4b50234048ea058e92d37abe42347f7d0fc61e96b9e49bfcd5f4ad6679b8\"" Aug 5 21:32:24.527797 containerd[1565]: time="2024-08-05T21:32:24.527767602Z" level=info msg="StartContainer for \"f38e4b50234048ea058e92d37abe42347f7d0fc61e96b9e49bfcd5f4ad6679b8\"" Aug 5 21:32:24.529826 containerd[1565]: time="2024-08-05T21:32:24.529753194Z" level=info msg="CreateContainer within sandbox \"c6a2237641046078aa3c63a08e6b042137b5ccddfa0b2de0cd0a1a3626d1f6df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c206efcab7ed398f91d355023b8a47e70ea00fc7f25354dfe154e67f501282f\"" Aug 5 21:32:24.531188 containerd[1565]: time="2024-08-05T21:32:24.531030958Z" level=info msg="StartContainer for \"7c206efcab7ed398f91d355023b8a47e70ea00fc7f25354dfe154e67f501282f\"" Aug 5 21:32:24.532238 containerd[1565]: time="2024-08-05T21:32:24.532148631Z" level=info msg="CreateContainer within sandbox \"ee0497aef7095d13bd84c73eede0000ee6066cade1accce96f7fc42f606a5404\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a19fc4aa2a8de2afa424db5b83ac3ef17e1a455aaecfd3f789d47a97781c97be\"" Aug 5 21:32:24.533560 
containerd[1565]: time="2024-08-05T21:32:24.532484609Z" level=info msg="StartContainer for \"a19fc4aa2a8de2afa424db5b83ac3ef17e1a455aaecfd3f789d47a97781c97be\"" Aug 5 21:32:24.614306 containerd[1565]: time="2024-08-05T21:32:24.614260963Z" level=info msg="StartContainer for \"a19fc4aa2a8de2afa424db5b83ac3ef17e1a455aaecfd3f789d47a97781c97be\" returns successfully" Aug 5 21:32:24.614968 containerd[1565]: time="2024-08-05T21:32:24.614824299Z" level=info msg="StartContainer for \"7c206efcab7ed398f91d355023b8a47e70ea00fc7f25354dfe154e67f501282f\" returns successfully" Aug 5 21:32:24.615084 containerd[1565]: time="2024-08-05T21:32:24.614828698Z" level=info msg="StartContainer for \"f38e4b50234048ea058e92d37abe42347f7d0fc61e96b9e49bfcd5f4ad6679b8\" returns successfully" Aug 5 21:32:24.999909 kubelet[2313]: E0805 21:32:24.999879 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:25.004519 kubelet[2313]: E0805 21:32:25.004496 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:25.006803 kubelet[2313]: E0805 21:32:25.006785 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:26.009629 kubelet[2313]: E0805 21:32:26.009582 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:26.010376 kubelet[2313]: E0805 21:32:26.010345 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:26.010453 kubelet[2313]: E0805 
21:32:26.010346 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:26.084048 kubelet[2313]: I0805 21:32:26.084010 2313 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 21:32:26.307910 kubelet[2313]: E0805 21:32:26.307793 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 5 21:32:26.371836 kubelet[2313]: I0805 21:32:26.371699 2313 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 21:32:26.379561 kubelet[2313]: E0805 21:32:26.379524 2313 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:32:26.480442 kubelet[2313]: E0805 21:32:26.480409 2313 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:32:26.580978 kubelet[2313]: E0805 21:32:26.580946 2313 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 21:32:26.968627 kubelet[2313]: I0805 21:32:26.968515 2313 apiserver.go:52] "Watching apiserver" Aug 5 21:32:26.972915 kubelet[2313]: I0805 21:32:26.972883 2313 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:32:27.014358 kubelet[2313]: E0805 21:32:27.014304 2313 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 5 21:32:27.014775 kubelet[2313]: E0805 21:32:27.014763 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:28.913772 systemd[1]: Reloading 
requested from client PID 2589 ('systemctl') (unit session-7.scope)... Aug 5 21:32:28.913789 systemd[1]: Reloading... Aug 5 21:32:28.977923 zram_generator::config[2629]: No configuration found. Aug 5 21:32:29.072670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:32:29.140033 systemd[1]: Reloading finished in 225 ms. Aug 5 21:32:29.166597 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:32:29.177934 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:32:29.178376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:32:29.192714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:32:29.290448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:32:29.294755 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:32:29.362874 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:32:29.362874 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:32:29.362874 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 21:32:29.363272 kubelet[2678]: I0805 21:32:29.362890 2678 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:32:29.369305 kubelet[2678]: I0805 21:32:29.369270 2678 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:32:29.369305 kubelet[2678]: I0805 21:32:29.369299 2678 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:32:29.370413 kubelet[2678]: I0805 21:32:29.369489 2678 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:32:29.371209 kubelet[2678]: I0805 21:32:29.371181 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 21:32:29.372338 kubelet[2678]: I0805 21:32:29.372294 2678 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:32:29.382935 kubelet[2678]: W0805 21:32:29.380637 2678 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:32:29.382935 kubelet[2678]: I0805 21:32:29.381423 2678 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:32:29.382935 kubelet[2678]: I0805 21:32:29.381761 2678 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:32:29.382935 kubelet[2678]: I0805 21:32:29.382010 2678 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:32:29.382935 kubelet[2678]: I0805 21:32:29.382045 2678 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:32:29.382935 kubelet[2678]: I0805 21:32:29.382054 2678 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:32:29.383215 kubelet[2678]: I0805 
21:32:29.382088 2678 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:32:29.383215 kubelet[2678]: I0805 21:32:29.382176 2678 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:32:29.383215 kubelet[2678]: I0805 21:32:29.382189 2678 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:32:29.383215 kubelet[2678]: I0805 21:32:29.382218 2678 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:32:29.383215 kubelet[2678]: I0805 21:32:29.382229 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:32:29.383909 kubelet[2678]: I0805 21:32:29.383445 2678 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:32:29.383996 kubelet[2678]: I0805 21:32:29.383988 2678 server.go:1232] "Started kubelet" Aug 5 21:32:29.387109 kubelet[2678]: I0805 21:32:29.387089 2678 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:32:29.388139 kubelet[2678]: I0805 21:32:29.388100 2678 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:32:29.388266 kubelet[2678]: I0805 21:32:29.388187 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:32:29.389722 kubelet[2678]: I0805 21:32:29.389690 2678 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:32:29.392483 kubelet[2678]: I0805 21:32:29.392366 2678 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:32:29.392625 kubelet[2678]: I0805 21:32:29.392582 2678 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:32:29.392840 kubelet[2678]: I0805 21:32:29.392762 2678 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:32:29.395667 kubelet[2678]: I0805 21:32:29.393975 2678 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:32:29.400588 kubelet[2678]: 
E0805 21:32:29.400551 2678 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 21:32:29.400588 kubelet[2678]: E0805 21:32:29.400586 2678 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 21:32:29.410495 kubelet[2678]: I0805 21:32:29.410447 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 21:32:29.412020 kubelet[2678]: I0805 21:32:29.411983 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 21:32:29.412020 kubelet[2678]: I0805 21:32:29.412012 2678 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 21:32:29.412143 kubelet[2678]: I0805 21:32:29.412040 2678 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 21:32:29.412143 kubelet[2678]: E0805 21:32:29.412089 2678 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 21:32:29.485132 kubelet[2678]: I0805 21:32:29.485039 2678 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 21:32:29.485931 kubelet[2678]: I0805 21:32:29.485916 2678 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 21:32:29.486025 kubelet[2678]: I0805 21:32:29.486015 2678 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:32:29.486216 kubelet[2678]: I0805 21:32:29.486202 2678 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 21:32:29.486300 kubelet[2678]: I0805 21:32:29.486291 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 21:32:29.486346 kubelet[2678]: I0805 21:32:29.486339 2678 policy_none.go:49] "None policy: Start"
Aug 5 21:32:29.487098 kubelet[2678]: I0805 21:32:29.487080 2678 memory_manager.go:169] "Starting
memorymanager" policy="None"
Aug 5 21:32:29.487176 kubelet[2678]: I0805 21:32:29.487105 2678 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 21:32:29.487318 kubelet[2678]: I0805 21:32:29.487295 2678 state_mem.go:75] "Updated machine memory state"
Aug 5 21:32:29.488412 kubelet[2678]: I0805 21:32:29.488366 2678 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 21:32:29.489035 kubelet[2678]: I0805 21:32:29.489019 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 21:32:29.496651 kubelet[2678]: I0805 21:32:29.496616 2678 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 21:32:29.502346 kubelet[2678]: I0805 21:32:29.502318 2678 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Aug 5 21:32:29.502423 kubelet[2678]: I0805 21:32:29.502404 2678 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Aug 5 21:32:29.512222 kubelet[2678]: I0805 21:32:29.512196 2678 topology_manager.go:215] "Topology Admit Handler" podUID="3ca630c7ac485ed80616f9411596daea" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 21:32:29.512317 kubelet[2678]: I0805 21:32:29.512285 2678 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 21:32:29.512378 kubelet[2678]: I0805 21:32:29.512332 2678 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 21:32:29.593748 kubelet[2678]: I0805 21:32:29.593705 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") "
pod="kube-system/kube-scheduler-localhost"
Aug 5 21:32:29.593748 kubelet[2678]: I0805 21:32:29.593749 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ca630c7ac485ed80616f9411596daea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ca630c7ac485ed80616f9411596daea\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 21:32:29.593914 kubelet[2678]: I0805 21:32:29.593771 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ca630c7ac485ed80616f9411596daea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ca630c7ac485ed80616f9411596daea\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 21:32:29.593914 kubelet[2678]: I0805 21:32:29.593803 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:29.593914 kubelet[2678]: I0805 21:32:29.593878 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:29.593914 kubelet[2678]: I0805 21:32:29.593900 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") "
pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:29.594005 kubelet[2678]: I0805 21:32:29.593921 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ca630c7ac485ed80616f9411596daea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ca630c7ac485ed80616f9411596daea\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 21:32:29.594005 kubelet[2678]: I0805 21:32:29.593940 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:29.594005 kubelet[2678]: I0805 21:32:29.593969 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 21:32:29.827737 kubelet[2678]: E0805 21:32:29.827211 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:29.827737 kubelet[2678]: E0805 21:32:29.827684 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:29.827737 kubelet[2678]: E0805 21:32:29.827706 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1
8.8.8.8"
Aug 5 21:32:30.383539 kubelet[2678]: I0805 21:32:30.383489 2678 apiserver.go:52] "Watching apiserver"
Aug 5 21:32:30.393308 kubelet[2678]: I0805 21:32:30.393258 2678 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 21:32:30.437692 kubelet[2678]: E0805 21:32:30.437638 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:30.437692 kubelet[2678]: E0805 21:32:30.437668 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:30.439838 kubelet[2678]: E0805 21:32:30.438783 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:30.454677 kubelet[2678]: I0805 21:32:30.454632 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.45458045 podCreationTimestamp="2024-08-05 21:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:32:30.44862584 +0000 UTC m=+1.150239329" watchObservedRunningTime="2024-08-05 21:32:30.45458045 +0000 UTC m=+1.156193979"
Aug 5 21:32:30.461749 kubelet[2678]: I0805 21:32:30.461678 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4616348239999999 podCreationTimestamp="2024-08-05 21:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:32:30.455072954 +0000 UTC m=+1.156686563" watchObservedRunningTime="2024-08-05
21:32:30.461634824 +0000 UTC m=+1.163248353"
Aug 5 21:32:30.467545 kubelet[2678]: I0805 21:32:30.467510 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4674822779999999 podCreationTimestamp="2024-08-05 21:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:32:30.461746901 +0000 UTC m=+1.163360430" watchObservedRunningTime="2024-08-05 21:32:30.467482278 +0000 UTC m=+1.169095807"
Aug 5 21:32:31.441725 kubelet[2678]: E0805 21:32:31.441445 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:32.443326 kubelet[2678]: E0805 21:32:32.443301 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:32.745951 kubelet[2678]: E0805 21:32:32.745838 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:33.444971 kubelet[2678]: E0805 21:32:33.444934 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:34.596102 sudo[1778]: pam_unix(sudo:session): session closed for user root
Aug 5 21:32:34.602751 sshd[1767]: pam_unix(sshd:session): session closed for user core
Aug 5 21:32:34.606363 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:60304.service: Deactivated successfully.
Aug 5 21:32:34.608543 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 21:32:34.609005 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit.
Aug 5 21:32:34.610170 systemd-logind[1531]: Removed session 7.
Aug 5 21:32:35.486248 kubelet[2678]: E0805 21:32:35.485083 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:36.449477 kubelet[2678]: E0805 21:32:36.449432 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:41.126382 update_engine[1536]: I0805 21:32:41.126308 1536 update_attempter.cc:509] Updating boot flags...
Aug 5 21:32:41.151857 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2775)
Aug 5 21:32:41.175449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2774)
Aug 5 21:32:41.650642 kubelet[2678]: E0805 21:32:41.650333 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:42.618658 kubelet[2678]: I0805 21:32:42.618629 2678 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 21:32:42.619419 containerd[1565]: time="2024-08-05T21:32:42.619329887Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 21:32:42.621188 kubelet[2678]: I0805 21:32:42.619695 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 21:32:42.753323 kubelet[2678]: E0805 21:32:42.753249 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:43.602224 kubelet[2678]: I0805 21:32:43.601986 2678 topology_manager.go:215] "Topology Admit Handler" podUID="6610d6f2-e1b9-4541-8401-76f18e4ff62e" podNamespace="kube-system" podName="kube-proxy-cbvsl"
Aug 5 21:32:43.691955 kubelet[2678]: I0805 21:32:43.691917 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mgq\" (UniqueName: \"kubernetes.io/projected/6610d6f2-e1b9-4541-8401-76f18e4ff62e-kube-api-access-t2mgq\") pod \"kube-proxy-cbvsl\" (UID: \"6610d6f2-e1b9-4541-8401-76f18e4ff62e\") " pod="kube-system/kube-proxy-cbvsl"
Aug 5 21:32:43.691955 kubelet[2678]: I0805 21:32:43.691958 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6610d6f2-e1b9-4541-8401-76f18e4ff62e-kube-proxy\") pod \"kube-proxy-cbvsl\" (UID: \"6610d6f2-e1b9-4541-8401-76f18e4ff62e\") " pod="kube-system/kube-proxy-cbvsl"
Aug 5 21:32:43.692105 kubelet[2678]: I0805 21:32:43.691982 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6610d6f2-e1b9-4541-8401-76f18e4ff62e-xtables-lock\") pod \"kube-proxy-cbvsl\" (UID: \"6610d6f2-e1b9-4541-8401-76f18e4ff62e\") " pod="kube-system/kube-proxy-cbvsl"
Aug 5 21:32:43.692105 kubelet[2678]: I0805 21:32:43.692002 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName:
\"kubernetes.io/host-path/6610d6f2-e1b9-4541-8401-76f18e4ff62e-lib-modules\") pod \"kube-proxy-cbvsl\" (UID: \"6610d6f2-e1b9-4541-8401-76f18e4ff62e\") " pod="kube-system/kube-proxy-cbvsl"
Aug 5 21:32:43.717180 kubelet[2678]: I0805 21:32:43.717123 2678 topology_manager.go:215] "Topology Admit Handler" podUID="01749556-d51c-49cf-80b5-7bd96da53a4f" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-pzl76"
Aug 5 21:32:43.792190 kubelet[2678]: I0805 21:32:43.792152 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zmjm\" (UniqueName: \"kubernetes.io/projected/01749556-d51c-49cf-80b5-7bd96da53a4f-kube-api-access-2zmjm\") pod \"tigera-operator-76c4974c85-pzl76\" (UID: \"01749556-d51c-49cf-80b5-7bd96da53a4f\") " pod="tigera-operator/tigera-operator-76c4974c85-pzl76"
Aug 5 21:32:43.793202 kubelet[2678]: I0805 21:32:43.792217 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01749556-d51c-49cf-80b5-7bd96da53a4f-var-lib-calico\") pod \"tigera-operator-76c4974c85-pzl76\" (UID: \"01749556-d51c-49cf-80b5-7bd96da53a4f\") " pod="tigera-operator/tigera-operator-76c4974c85-pzl76"
Aug 5 21:32:43.904801 kubelet[2678]: E0805 21:32:43.904698 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:43.905517 containerd[1565]: time="2024-08-05T21:32:43.905458914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbvsl,Uid:6610d6f2-e1b9-4541-8401-76f18e4ff62e,Namespace:kube-system,Attempt:0,}"
Aug 5 21:32:43.926321 containerd[1565]: time="2024-08-05T21:32:43.926221303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:32:43.926321 containerd[1565]: time="2024-08-05T21:32:43.926281182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:32:43.927017 containerd[1565]: time="2024-08-05T21:32:43.926302502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:32:43.927017 containerd[1565]: time="2024-08-05T21:32:43.926901932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:32:43.958664 containerd[1565]: time="2024-08-05T21:32:43.958622866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbvsl,Uid:6610d6f2-e1b9-4541-8401-76f18e4ff62e,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ffd5d2e708fcb4262083e40a55b03151e475998c8aedcc36b00f1d2d6f2396\""
Aug 5 21:32:43.959725 kubelet[2678]: E0805 21:32:43.959557 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:43.961519 containerd[1565]: time="2024-08-05T21:32:43.961479621Z" level=info msg="CreateContainer within sandbox \"45ffd5d2e708fcb4262083e40a55b03151e475998c8aedcc36b00f1d2d6f2396\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 21:32:43.972410 containerd[1565]: time="2024-08-05T21:32:43.972372647Z" level=info msg="CreateContainer within sandbox \"45ffd5d2e708fcb4262083e40a55b03151e475998c8aedcc36b00f1d2d6f2396\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b8576831677de8462aec715b4794e3506c03efb0361d12ebb252fbd7f9aff0d2\""
Aug 5 21:32:43.973607 containerd[1565]: time="2024-08-05T21:32:43.973575388Z" level=info msg="StartContainer for
\"b8576831677de8462aec715b4794e3506c03efb0361d12ebb252fbd7f9aff0d2\""
Aug 5 21:32:44.021009 containerd[1565]: time="2024-08-05T21:32:44.020970647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-pzl76,Uid:01749556-d51c-49cf-80b5-7bd96da53a4f,Namespace:tigera-operator,Attempt:0,}"
Aug 5 21:32:44.023069 containerd[1565]: time="2024-08-05T21:32:44.023034296Z" level=info msg="StartContainer for \"b8576831677de8462aec715b4794e3506c03efb0361d12ebb252fbd7f9aff0d2\" returns successfully"
Aug 5 21:32:44.042165 containerd[1565]: time="2024-08-05T21:32:44.041942489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:32:44.042165 containerd[1565]: time="2024-08-05T21:32:44.042002328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:32:44.042165 containerd[1565]: time="2024-08-05T21:32:44.042022007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:32:44.042165 containerd[1565]: time="2024-08-05T21:32:44.042043767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:32:44.087664 containerd[1565]: time="2024-08-05T21:32:44.087619915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-pzl76,Uid:01749556-d51c-49cf-80b5-7bd96da53a4f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"43ff45a3d5dc05f713e861384fd51b70ce2fe2c5163edba39018ed62141356e6\""
Aug 5 21:32:44.089499 containerd[1565]: time="2024-08-05T21:32:44.089442048Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 21:32:44.463077 kubelet[2678]: E0805 21:32:44.463047 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:45.101471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416517162.mount: Deactivated successfully.
Aug 5 21:32:45.546670 containerd[1565]: time="2024-08-05T21:32:45.546553838Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:45.547741 containerd[1565]: time="2024-08-05T21:32:45.547533984Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473658"
Aug 5 21:32:45.548350 containerd[1565]: time="2024-08-05T21:32:45.548308453Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:45.550602 containerd[1565]: time="2024-08-05T21:32:45.550570380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:32:45.551354 containerd[1565]: time="2024-08-05T21:32:45.551319329Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id
\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.461843322s"
Aug 5 21:32:45.551354 containerd[1565]: time="2024-08-05T21:32:45.551351249Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\""
Aug 5 21:32:45.552934 containerd[1565]: time="2024-08-05T21:32:45.552791588Z" level=info msg="CreateContainer within sandbox \"43ff45a3d5dc05f713e861384fd51b70ce2fe2c5163edba39018ed62141356e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 21:32:45.562663 containerd[1565]: time="2024-08-05T21:32:45.562627365Z" level=info msg="CreateContainer within sandbox \"43ff45a3d5dc05f713e861384fd51b70ce2fe2c5163edba39018ed62141356e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e4bfbea74b14fe740eab72c4cc5810322a1b96e42e5e3ac0534929bc8974517f\""
Aug 5 21:32:45.563765 containerd[1565]: time="2024-08-05T21:32:45.563069799Z" level=info msg="StartContainer for \"e4bfbea74b14fe740eab72c4cc5810322a1b96e42e5e3ac0534929bc8974517f\""
Aug 5 21:32:45.606124 containerd[1565]: time="2024-08-05T21:32:45.606083977Z" level=info msg="StartContainer for \"e4bfbea74b14fe740eab72c4cc5810322a1b96e42e5e3ac0534929bc8974517f\" returns successfully"
Aug 5 21:32:46.473121 kubelet[2678]: I0805 21:32:46.472927 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cbvsl" podStartSLOduration=3.472890793 podCreationTimestamp="2024-08-05 21:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:32:44.470522383 +0000 UTC m=+15.172135912" watchObservedRunningTime="2024-08-05 21:32:46.472890793 +0000
UTC m=+17.174504321"
Aug 5 21:32:49.424292 kubelet[2678]: I0805 21:32:49.424217 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-pzl76" podStartSLOduration=4.961389516 podCreationTimestamp="2024-08-05 21:32:43 +0000 UTC" firstStartedPulling="2024-08-05 21:32:44.088853017 +0000 UTC m=+14.790466506" lastFinishedPulling="2024-08-05 21:32:45.551620965 +0000 UTC m=+16.253234494" observedRunningTime="2024-08-05 21:32:46.472862713 +0000 UTC m=+17.174476242" watchObservedRunningTime="2024-08-05 21:32:49.424157504 +0000 UTC m=+20.125771113"
Aug 5 21:32:49.806446 kubelet[2678]: I0805 21:32:49.805093 2678 topology_manager.go:215] "Topology Admit Handler" podUID="ee32f8cc-65da-4191-97e8-3a91d8da3847" podNamespace="calico-system" podName="calico-typha-8668bb95cf-6xjbj"
Aug 5 21:32:49.852943 kubelet[2678]: I0805 21:32:49.852907 2678 topology_manager.go:215] "Topology Admit Handler" podUID="fec51d87-c4e1-477d-9303-d09dfd0cde39" podNamespace="calico-system" podName="calico-node-xx4tm"
Aug 5 21:32:49.932773 kubelet[2678]: I0805 21:32:49.932737 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee32f8cc-65da-4191-97e8-3a91d8da3847-typha-certs\") pod \"calico-typha-8668bb95cf-6xjbj\" (UID: \"ee32f8cc-65da-4191-97e8-3a91d8da3847\") " pod="calico-system/calico-typha-8668bb95cf-6xjbj"
Aug 5 21:32:49.932773 kubelet[2678]: I0805 21:32:49.932783 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znw5g\" (UniqueName: \"kubernetes.io/projected/ee32f8cc-65da-4191-97e8-3a91d8da3847-kube-api-access-znw5g\") pod \"calico-typha-8668bb95cf-6xjbj\" (UID: \"ee32f8cc-65da-4191-97e8-3a91d8da3847\") " pod="calico-system/calico-typha-8668bb95cf-6xjbj"
Aug 5 21:32:49.932962 kubelet[2678]: I0805 21:32:49.932845 2678 reconciler_common.go:258]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee32f8cc-65da-4191-97e8-3a91d8da3847-tigera-ca-bundle\") pod \"calico-typha-8668bb95cf-6xjbj\" (UID: \"ee32f8cc-65da-4191-97e8-3a91d8da3847\") " pod="calico-system/calico-typha-8668bb95cf-6xjbj"
Aug 5 21:32:49.992214 kubelet[2678]: I0805 21:32:49.992170 2678 topology_manager.go:215] "Topology Admit Handler" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83" podNamespace="calico-system" podName="csi-node-driver-fjpwf"
Aug 5 21:32:49.992895 kubelet[2678]: E0805 21:32:49.992461 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjpwf" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83"
Aug 5 21:32:50.033520 kubelet[2678]: I0805 21:32:50.033473 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-var-lib-calico\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.033729 kubelet[2678]: I0805 21:32:50.033545 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fec51d87-c4e1-477d-9303-d09dfd0cde39-node-certs\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.037840 kubelet[2678]: I0805 21:32:50.035614 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-flexvol-driver-host\") pod \"calico-node-xx4tm\"
(UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.037840 kubelet[2678]: I0805 21:32:50.035711 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-cni-bin-dir\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.037840 kubelet[2678]: I0805 21:32:50.035755 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9cg\" (UniqueName: \"kubernetes.io/projected/fec51d87-c4e1-477d-9303-d09dfd0cde39-kube-api-access-gb9cg\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.037840 kubelet[2678]: I0805 21:32:50.035776 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-var-run-calico\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.037840 kubelet[2678]: I0805 21:32:50.035805 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fec51d87-c4e1-477d-9303-d09dfd0cde39-tigera-ca-bundle\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.038085 kubelet[2678]: I0805 21:32:50.035840 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-policysync\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") "
pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.038085 kubelet[2678]: I0805 21:32:50.035862 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-cni-net-dir\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.038085 kubelet[2678]: I0805 21:32:50.035880 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-lib-modules\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.038085 kubelet[2678]: I0805 21:32:50.035904 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-cni-log-dir\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.038085 kubelet[2678]: I0805 21:32:50.035925 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fec51d87-c4e1-477d-9303-d09dfd0cde39-xtables-lock\") pod \"calico-node-xx4tm\" (UID: \"fec51d87-c4e1-477d-9303-d09dfd0cde39\") " pod="calico-system/calico-node-xx4tm"
Aug 5 21:32:50.109824 kubelet[2678]: E0805 21:32:50.109768 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:32:50.111729 containerd[1565]: time="2024-08-05T21:32:50.110600583Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:calico-typha-8668bb95cf-6xjbj,Uid:ee32f8cc-65da-4191-97e8-3a91d8da3847,Namespace:calico-system,Attempt:0,}" Aug 5 21:32:50.136706 kubelet[2678]: I0805 21:32:50.136652 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns8xn\" (UniqueName: \"kubernetes.io/projected/926c7d21-c63e-46bb-9599-6d26d109fd83-kube-api-access-ns8xn\") pod \"csi-node-driver-fjpwf\" (UID: \"926c7d21-c63e-46bb-9599-6d26d109fd83\") " pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:50.136866 kubelet[2678]: I0805 21:32:50.136729 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/926c7d21-c63e-46bb-9599-6d26d109fd83-socket-dir\") pod \"csi-node-driver-fjpwf\" (UID: \"926c7d21-c63e-46bb-9599-6d26d109fd83\") " pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:50.136866 kubelet[2678]: I0805 21:32:50.136765 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/926c7d21-c63e-46bb-9599-6d26d109fd83-registration-dir\") pod \"csi-node-driver-fjpwf\" (UID: \"926c7d21-c63e-46bb-9599-6d26d109fd83\") " pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:50.136866 kubelet[2678]: I0805 21:32:50.136800 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/926c7d21-c63e-46bb-9599-6d26d109fd83-kubelet-dir\") pod \"csi-node-driver-fjpwf\" (UID: \"926c7d21-c63e-46bb-9599-6d26d109fd83\") " pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:50.138980 kubelet[2678]: I0805 21:32:50.137034 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/926c7d21-c63e-46bb-9599-6d26d109fd83-varrun\") pod \"csi-node-driver-fjpwf\" 
(UID: \"926c7d21-c63e-46bb-9599-6d26d109fd83\") " pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:50.145205 kubelet[2678]: E0805 21:32:50.144988 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.145205 kubelet[2678]: W0805 21:32:50.145012 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.145476 kubelet[2678]: E0805 21:32:50.145418 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.152194 kubelet[2678]: E0805 21:32:50.152117 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.152194 kubelet[2678]: W0805 21:32:50.152135 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.152194 kubelet[2678]: E0805 21:32:50.152155 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.161820 kubelet[2678]: E0805 21:32:50.161763 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:50.163461 containerd[1565]: time="2024-08-05T21:32:50.162337587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xx4tm,Uid:fec51d87-c4e1-477d-9303-d09dfd0cde39,Namespace:calico-system,Attempt:0,}" Aug 5 21:32:50.175742 containerd[1565]: time="2024-08-05T21:32:50.175643794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:32:50.175742 containerd[1565]: time="2024-08-05T21:32:50.175702833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:50.175742 containerd[1565]: time="2024-08-05T21:32:50.175717793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:32:50.175742 containerd[1565]: time="2024-08-05T21:32:50.175727153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:50.195886 containerd[1565]: time="2024-08-05T21:32:50.194156501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:32:50.195886 containerd[1565]: time="2024-08-05T21:32:50.194218300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:50.195886 containerd[1565]: time="2024-08-05T21:32:50.194245060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:32:50.195886 containerd[1565]: time="2024-08-05T21:32:50.194258940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:32:50.236428 containerd[1565]: time="2024-08-05T21:32:50.236303256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8668bb95cf-6xjbj,Uid:ee32f8cc-65da-4191-97e8-3a91d8da3847,Namespace:calico-system,Attempt:0,} returns sandbox id \"a557a438e3e5af53ef8e439736a14064f81b1b8b0f3886f94c7b47cf073ef5a4\"" Aug 5 21:32:50.237131 kubelet[2678]: E0805 21:32:50.237090 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:50.240985 kubelet[2678]: E0805 21:32:50.238948 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.240985 kubelet[2678]: W0805 21:32:50.238967 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.240985 kubelet[2678]: E0805 21:32:50.238987 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.240985 kubelet[2678]: E0805 21:32:50.239982 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.240985 kubelet[2678]: W0805 21:32:50.239995 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.241609 kubelet[2678]: E0805 21:32:50.241588 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.241609 kubelet[2678]: W0805 21:32:50.241604 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.241692 kubelet[2678]: E0805 21:32:50.241620 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.241804 kubelet[2678]: E0805 21:32:50.241785 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.242563 kubelet[2678]: E0805 21:32:50.242536 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.242563 kubelet[2678]: W0805 21:32:50.242552 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.242563 kubelet[2678]: E0805 21:32:50.242568 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.244171 containerd[1565]: time="2024-08-05T21:32:50.244134605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 21:32:50.244856 kubelet[2678]: E0805 21:32:50.244796 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.244856 kubelet[2678]: W0805 21:32:50.244827 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.244856 kubelet[2678]: E0805 21:32:50.244842 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.245725 kubelet[2678]: E0805 21:32:50.245703 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.245725 kubelet[2678]: W0805 21:32:50.245720 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.245993 kubelet[2678]: E0805 21:32:50.245807 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.245993 kubelet[2678]: E0805 21:32:50.245947 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.245993 kubelet[2678]: W0805 21:32:50.245957 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.246087 kubelet[2678]: E0805 21:32:50.246069 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.248743 kubelet[2678]: E0805 21:32:50.247338 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.248743 kubelet[2678]: W0805 21:32:50.247358 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.248743 kubelet[2678]: E0805 21:32:50.247385 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.249296 kubelet[2678]: E0805 21:32:50.249201 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.249296 kubelet[2678]: W0805 21:32:50.249217 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.249296 kubelet[2678]: E0805 21:32:50.249290 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.250621 containerd[1565]: time="2024-08-05T21:32:50.250584811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xx4tm,Uid:fec51d87-c4e1-477d-9303-d09dfd0cde39,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\"" Aug 5 21:32:50.251404 kubelet[2678]: E0805 21:32:50.251293 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.251404 kubelet[2678]: W0805 21:32:50.251392 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.251507 kubelet[2678]: E0805 21:32:50.251456 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.251789 kubelet[2678]: E0805 21:32:50.251674 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:50.252988 kubelet[2678]: E0805 21:32:50.252967 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.252988 kubelet[2678]: W0805 21:32:50.252982 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.253088 kubelet[2678]: E0805 21:32:50.253023 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.253540 kubelet[2678]: E0805 21:32:50.253208 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.253540 kubelet[2678]: W0805 21:32:50.253225 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.253540 kubelet[2678]: E0805 21:32:50.253464 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.253708 kubelet[2678]: E0805 21:32:50.253599 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.253708 kubelet[2678]: W0805 21:32:50.253610 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.253750 kubelet[2678]: E0805 21:32:50.253694 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.254037 kubelet[2678]: E0805 21:32:50.253951 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.254037 kubelet[2678]: W0805 21:32:50.253965 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.254122 kubelet[2678]: E0805 21:32:50.254080 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.254402 kubelet[2678]: E0805 21:32:50.254384 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.254461 kubelet[2678]: W0805 21:32:50.254399 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.254461 kubelet[2678]: E0805 21:32:50.254453 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.254779 kubelet[2678]: E0805 21:32:50.254754 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.254779 kubelet[2678]: W0805 21:32:50.254771 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.254942 kubelet[2678]: E0805 21:32:50.254933 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.255322 kubelet[2678]: E0805 21:32:50.255303 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.255380 kubelet[2678]: W0805 21:32:50.255320 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.255470 kubelet[2678]: E0805 21:32:50.255438 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.255602 kubelet[2678]: E0805 21:32:50.255572 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.255638 kubelet[2678]: W0805 21:32:50.255603 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.255717 kubelet[2678]: E0805 21:32:50.255699 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.255905 kubelet[2678]: E0805 21:32:50.255857 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.255957 kubelet[2678]: W0805 21:32:50.255927 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.256070 kubelet[2678]: E0805 21:32:50.256054 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.256288 kubelet[2678]: E0805 21:32:50.256275 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.256288 kubelet[2678]: W0805 21:32:50.256288 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.256365 kubelet[2678]: E0805 21:32:50.256301 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.256520 kubelet[2678]: E0805 21:32:50.256508 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.256520 kubelet[2678]: W0805 21:32:50.256520 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.256584 kubelet[2678]: E0805 21:32:50.256532 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.256724 kubelet[2678]: E0805 21:32:50.256710 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.256724 kubelet[2678]: W0805 21:32:50.256722 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.256787 kubelet[2678]: E0805 21:32:50.256734 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.256970 kubelet[2678]: E0805 21:32:50.256957 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.256970 kubelet[2678]: W0805 21:32:50.256969 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.257132 kubelet[2678]: E0805 21:32:50.257046 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.257175 kubelet[2678]: E0805 21:32:50.257164 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.257175 kubelet[2678]: W0805 21:32:50.257173 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.257226 kubelet[2678]: E0805 21:32:50.257185 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:50.257387 kubelet[2678]: E0805 21:32:50.257366 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.257426 kubelet[2678]: W0805 21:32:50.257388 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.257426 kubelet[2678]: E0805 21:32:50.257400 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:50.267231 kubelet[2678]: E0805 21:32:50.267208 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:50.267231 kubelet[2678]: W0805 21:32:50.267226 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:50.267363 kubelet[2678]: E0805 21:32:50.267247 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:51.413457 kubelet[2678]: E0805 21:32:51.413400 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjpwf" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83" Aug 5 21:32:51.661962 containerd[1565]: time="2024-08-05T21:32:51.661917643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:51.662866 containerd[1565]: time="2024-08-05T21:32:51.662665114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 21:32:51.663590 containerd[1565]: time="2024-08-05T21:32:51.663500785Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:51.665744 containerd[1565]: time="2024-08-05T21:32:51.665714161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:51.667236 containerd[1565]: time="2024-08-05T21:32:51.667187704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.422539785s" Aug 5 21:32:51.667236 containerd[1565]: time="2024-08-05T21:32:51.667221144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 21:32:51.667967 containerd[1565]: time="2024-08-05T21:32:51.667662659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 21:32:51.676509 containerd[1565]: time="2024-08-05T21:32:51.676467282Z" level=info msg="CreateContainer within sandbox \"a557a438e3e5af53ef8e439736a14064f81b1b8b0f3886f94c7b47cf073ef5a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 21:32:51.691858 containerd[1565]: time="2024-08-05T21:32:51.691803633Z" level=info msg="CreateContainer within sandbox \"a557a438e3e5af53ef8e439736a14064f81b1b8b0f3886f94c7b47cf073ef5a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"95c4d660d2258e46f0cf69bd85a0ad8ddd795e756180cadf77a443f7a7a69dfe\"" Aug 5 21:32:51.692491 containerd[1565]: time="2024-08-05T21:32:51.692392107Z" level=info msg="StartContainer for \"95c4d660d2258e46f0cf69bd85a0ad8ddd795e756180cadf77a443f7a7a69dfe\"" Aug 5 21:32:51.745144 containerd[1565]: time="2024-08-05T21:32:51.745100125Z" level=info msg="StartContainer for \"95c4d660d2258e46f0cf69bd85a0ad8ddd795e756180cadf77a443f7a7a69dfe\" returns successfully" Aug 5 21:32:52.486458 kubelet[2678]: E0805 21:32:52.486412 2678 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:52.494531 kubelet[2678]: I0805 21:32:52.494167 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8668bb95cf-6xjbj" podStartSLOduration=2.0690146130000002 podCreationTimestamp="2024-08-05 21:32:49 +0000 UTC" firstStartedPulling="2024-08-05 21:32:50.242357546 +0000 UTC m=+20.943971075" lastFinishedPulling="2024-08-05 21:32:51.667474861 +0000 UTC m=+22.369088390" observedRunningTime="2024-08-05 21:32:52.493882651 +0000 UTC m=+23.195496220" watchObservedRunningTime="2024-08-05 21:32:52.494131928 +0000 UTC m=+23.195745457" Aug 5 21:32:52.554193 kubelet[2678]: E0805 21:32:52.554167 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:52.554481 kubelet[2678]: W0805 21:32:52.554348 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:52.554481 kubelet[2678]: E0805 21:32:52.554379 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:52.554628 kubelet[2678]: E0805 21:32:52.554615 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:52.554758 kubelet[2678]: W0805 21:32:52.554686 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:52.554758 kubelet[2678]: E0805 21:32:52.554704 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:52.554988 kubelet[2678]: E0805 21:32:52.554975 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:52.555159 kubelet[2678]: W0805 21:32:52.555057 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:52.555159 kubelet[2678]: E0805 21:32:52.555076 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:52.555295 kubelet[2678]: E0805 21:32:52.555282 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:52.555371 kubelet[2678]: W0805 21:32:52.555360 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:52.555513 kubelet[2678]: E0805 21:32:52.555420 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:32:52.555759 kubelet[2678]: E0805 21:32:52.555747 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:32:52.556033 kubelet[2678]: W0805 21:32:52.555955 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:32:52.556033 kubelet[2678]: E0805 21:32:52.555976 2678 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:32:52.870053 containerd[1565]: time="2024-08-05T21:32:52.869905595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:52.870645 containerd[1565]: time="2024-08-05T21:32:52.870298711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 21:32:52.871683 containerd[1565]: time="2024-08-05T21:32:52.871448819Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:52.875488 containerd[1565]: time="2024-08-05T21:32:52.875433216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:52.875963 containerd[1565]: time="2024-08-05T21:32:52.875923651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.208225912s" Aug 5 21:32:52.875963 containerd[1565]: time="2024-08-05T21:32:52.875960091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 21:32:52.879411 containerd[1565]: time="2024-08-05T21:32:52.879326735Z" level=info msg="CreateContainer within sandbox \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 21:32:52.891213 containerd[1565]: time="2024-08-05T21:32:52.891163130Z" level=info msg="CreateContainer within sandbox \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f38dce3d1184ef9e8dd49fcf48d0cd7e15cc3eecc4334d07eabb316803cdec3f\"" Aug 5 21:32:52.891870 containerd[1565]: time="2024-08-05T21:32:52.891824563Z" level=info msg="StartContainer for \"f38dce3d1184ef9e8dd49fcf48d0cd7e15cc3eecc4334d07eabb316803cdec3f\"" Aug 5 21:32:52.949011 containerd[1565]: time="2024-08-05T21:32:52.948968439Z" level=info msg="StartContainer for \"f38dce3d1184ef9e8dd49fcf48d0cd7e15cc3eecc4334d07eabb316803cdec3f\" returns successfully" Aug 5 21:32:53.039411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38dce3d1184ef9e8dd49fcf48d0cd7e15cc3eecc4334d07eabb316803cdec3f-rootfs.mount: Deactivated successfully. Aug 5 21:32:53.040273 containerd[1565]: time="2024-08-05T21:32:53.040217771Z" level=info msg="shim disconnected" id=f38dce3d1184ef9e8dd49fcf48d0cd7e15cc3eecc4334d07eabb316803cdec3f namespace=k8s.io Aug 5 21:32:53.040273 containerd[1565]: time="2024-08-05T21:32:53.040270850Z" level=warning msg="cleaning up after shim disconnected" id=f38dce3d1184ef9e8dd49fcf48d0cd7e15cc3eecc4334d07eabb316803cdec3f namespace=k8s.io Aug 5 21:32:53.040398 containerd[1565]: time="2024-08-05T21:32:53.040279650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:32:53.413503 kubelet[2678]: E0805 21:32:53.413170 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjpwf" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83" Aug 5 21:32:53.487474 kubelet[2678]: E0805 21:32:53.487440 2678 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:53.487987 kubelet[2678]: E0805 21:32:53.487966 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:53.488876 containerd[1565]: time="2024-08-05T21:32:53.488835178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 21:32:55.412756 kubelet[2678]: E0805 21:32:55.412719 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fjpwf" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83" Aug 5 21:32:56.033271 containerd[1565]: time="2024-08-05T21:32:56.033225247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:56.033794 containerd[1565]: time="2024-08-05T21:32:56.033767442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Aug 5 21:32:56.034539 containerd[1565]: time="2024-08-05T21:32:56.034515475Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:56.036797 containerd[1565]: time="2024-08-05T21:32:56.036410458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:32:56.037360 containerd[1565]: time="2024-08-05T21:32:56.037245370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.548371113s" Aug 5 21:32:56.037360 containerd[1565]: time="2024-08-05T21:32:56.037276770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Aug 5 21:32:56.039027 containerd[1565]: time="2024-08-05T21:32:56.038882316Z" level=info msg="CreateContainer within sandbox \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 21:32:56.049380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829097484.mount: Deactivated successfully. Aug 5 21:32:56.051696 containerd[1565]: time="2024-08-05T21:32:56.051653360Z" level=info msg="CreateContainer within sandbox \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"71b2710875b59c2330c14d5edeb205705815c015b7358feabecda29f2542d548\"" Aug 5 21:32:56.052106 containerd[1565]: time="2024-08-05T21:32:56.052083117Z" level=info msg="StartContainer for \"71b2710875b59c2330c14d5edeb205705815c015b7358feabecda29f2542d548\"" Aug 5 21:32:56.095093 containerd[1565]: time="2024-08-05T21:32:56.095035169Z" level=info msg="StartContainer for \"71b2710875b59c2330c14d5edeb205705815c015b7358feabecda29f2542d548\" returns successfully" Aug 5 21:32:56.503566 kubelet[2678]: E0805 21:32:56.503528 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:56.595246 containerd[1565]: time="2024-08-05T21:32:56.595194456Z" level=error msg="failed to reload cni configuration after 
receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:32:56.615358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71b2710875b59c2330c14d5edeb205705815c015b7358feabecda29f2542d548-rootfs.mount: Deactivated successfully. Aug 5 21:32:56.645545 containerd[1565]: time="2024-08-05T21:32:56.645473242Z" level=info msg="shim disconnected" id=71b2710875b59c2330c14d5edeb205705815c015b7358feabecda29f2542d548 namespace=k8s.io Aug 5 21:32:56.645545 containerd[1565]: time="2024-08-05T21:32:56.645532042Z" level=warning msg="cleaning up after shim disconnected" id=71b2710875b59c2330c14d5edeb205705815c015b7358feabecda29f2542d548 namespace=k8s.io Aug 5 21:32:56.645545 containerd[1565]: time="2024-08-05T21:32:56.645542961Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:32:56.671237 kubelet[2678]: I0805 21:32:56.671189 2678 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 21:32:56.689880 kubelet[2678]: I0805 21:32:56.689487 2678 topology_manager.go:215] "Topology Admit Handler" podUID="7f194363-3bf2-4565-81a0-e7e8a1dc0b71" podNamespace="calico-system" podName="calico-kube-controllers-78555dd857-hs25k" Aug 5 21:32:56.691584 kubelet[2678]: I0805 21:32:56.691149 2678 topology_manager.go:215] "Topology Admit Handler" podUID="250133c2-f2cb-49cc-a9e1-4020ef81de96" podNamespace="kube-system" podName="coredns-5dd5756b68-p2qhs" Aug 5 21:32:56.691584 kubelet[2678]: I0805 21:32:56.691485 2678 topology_manager.go:215] "Topology Admit Handler" podUID="4b1da061-9919-457a-87c2-aab1ccb0c931" podNamespace="kube-system" podName="coredns-5dd5756b68-7g4l5" Aug 5 21:32:56.800859 kubelet[2678]: I0805 21:32:56.800711 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/250133c2-f2cb-49cc-a9e1-4020ef81de96-config-volume\") pod \"coredns-5dd5756b68-p2qhs\" (UID: \"250133c2-f2cb-49cc-a9e1-4020ef81de96\") " pod="kube-system/coredns-5dd5756b68-p2qhs" Aug 5 21:32:56.800859 kubelet[2678]: I0805 21:32:56.800822 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b1da061-9919-457a-87c2-aab1ccb0c931-config-volume\") pod \"coredns-5dd5756b68-7g4l5\" (UID: \"4b1da061-9919-457a-87c2-aab1ccb0c931\") " pod="kube-system/coredns-5dd5756b68-7g4l5" Aug 5 21:32:56.801001 kubelet[2678]: I0805 21:32:56.800875 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f194363-3bf2-4565-81a0-e7e8a1dc0b71-tigera-ca-bundle\") pod \"calico-kube-controllers-78555dd857-hs25k\" (UID: \"7f194363-3bf2-4565-81a0-e7e8a1dc0b71\") " pod="calico-system/calico-kube-controllers-78555dd857-hs25k" Aug 5 21:32:56.801001 kubelet[2678]: I0805 21:32:56.800900 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vsdk\" (UniqueName: \"kubernetes.io/projected/7f194363-3bf2-4565-81a0-e7e8a1dc0b71-kube-api-access-7vsdk\") pod \"calico-kube-controllers-78555dd857-hs25k\" (UID: \"7f194363-3bf2-4565-81a0-e7e8a1dc0b71\") " pod="calico-system/calico-kube-controllers-78555dd857-hs25k" Aug 5 21:32:56.801001 kubelet[2678]: I0805 21:32:56.800922 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59c4\" (UniqueName: \"kubernetes.io/projected/4b1da061-9919-457a-87c2-aab1ccb0c931-kube-api-access-p59c4\") pod \"coredns-5dd5756b68-7g4l5\" (UID: \"4b1da061-9919-457a-87c2-aab1ccb0c931\") " pod="kube-system/coredns-5dd5756b68-7g4l5" Aug 5 21:32:56.801001 kubelet[2678]: I0805 21:32:56.800944 2678 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbv89\" (UniqueName: \"kubernetes.io/projected/250133c2-f2cb-49cc-a9e1-4020ef81de96-kube-api-access-mbv89\") pod \"coredns-5dd5756b68-p2qhs\" (UID: \"250133c2-f2cb-49cc-a9e1-4020ef81de96\") " pod="kube-system/coredns-5dd5756b68-p2qhs" Aug 5 21:32:56.995566 containerd[1565]: time="2024-08-05T21:32:56.995520923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78555dd857-hs25k,Uid:7f194363-3bf2-4565-81a0-e7e8a1dc0b71,Namespace:calico-system,Attempt:0,}" Aug 5 21:32:56.996930 kubelet[2678]: E0805 21:32:56.996890 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:56.997335 containerd[1565]: time="2024-08-05T21:32:56.997265708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g4l5,Uid:4b1da061-9919-457a-87c2-aab1ccb0c931,Namespace:kube-system,Attempt:0,}" Aug 5 21:32:56.999990 kubelet[2678]: E0805 21:32:56.999969 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:57.000960 containerd[1565]: time="2024-08-05T21:32:57.000932235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p2qhs,Uid:250133c2-f2cb-49cc-a9e1-4020ef81de96,Namespace:kube-system,Attempt:0,}" Aug 5 21:32:57.170741 containerd[1565]: time="2024-08-05T21:32:57.170694519Z" level=error msg="Failed to destroy network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.173247 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97-shm.mount: Deactivated successfully. Aug 5 21:32:57.173608 containerd[1565]: time="2024-08-05T21:32:57.173573774Z" level=error msg="encountered an error cleaning up failed sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.173720 containerd[1565]: time="2024-08-05T21:32:57.173695413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g4l5,Uid:4b1da061-9919-457a-87c2-aab1ccb0c931,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.173984 containerd[1565]: time="2024-08-05T21:32:57.173630413Z" level=error msg="Failed to destroy network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.174432 kubelet[2678]: E0805 21:32:57.174385 2678 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 
21:32:57.174518 kubelet[2678]: E0805 21:32:57.174466 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7g4l5" Aug 5 21:32:57.174518 kubelet[2678]: E0805 21:32:57.174487 2678 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7g4l5" Aug 5 21:32:57.174564 kubelet[2678]: E0805 21:32:57.174538 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-7g4l5_kube-system(4b1da061-9919-457a-87c2-aab1ccb0c931)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-7g4l5_kube-system(4b1da061-9919-457a-87c2-aab1ccb0c931)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7g4l5" podUID="4b1da061-9919-457a-87c2-aab1ccb0c931" Aug 5 21:32:57.174662 containerd[1565]: time="2024-08-05T21:32:57.174631804Z" level=error msg="encountered an error cleaning up failed sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.174928 containerd[1565]: time="2024-08-05T21:32:57.174900722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p2qhs,Uid:250133c2-f2cb-49cc-a9e1-4020ef81de96,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.176317 kubelet[2678]: E0805 21:32:57.175735 2678 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.176317 kubelet[2678]: E0805 21:32:57.175783 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-p2qhs" Aug 5 21:32:57.176317 kubelet[2678]: E0805 21:32:57.175807 2678 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-p2qhs" Aug 5 21:32:57.176502 kubelet[2678]: E0805 21:32:57.175861 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-p2qhs_kube-system(250133c2-f2cb-49cc-a9e1-4020ef81de96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-p2qhs_kube-system(250133c2-f2cb-49cc-a9e1-4020ef81de96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-p2qhs" podUID="250133c2-f2cb-49cc-a9e1-4020ef81de96" Aug 5 21:32:57.177284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3-shm.mount: Deactivated successfully. 
Aug 5 21:32:57.178113 containerd[1565]: time="2024-08-05T21:32:57.177963615Z" level=error msg="Failed to destroy network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.178321 containerd[1565]: time="2024-08-05T21:32:57.178284533Z" level=error msg="encountered an error cleaning up failed sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.178381 containerd[1565]: time="2024-08-05T21:32:57.178335852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78555dd857-hs25k,Uid:7f194363-3bf2-4565-81a0-e7e8a1dc0b71,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.180043 kubelet[2678]: E0805 21:32:57.179762 2678 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.180254 kubelet[2678]: E0805 21:32:57.180126 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78555dd857-hs25k" Aug 5 21:32:57.180481 kubelet[2678]: E0805 21:32:57.180341 2678 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78555dd857-hs25k" Aug 5 21:32:57.180481 kubelet[2678]: E0805 21:32:57.180409 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78555dd857-hs25k_calico-system(7f194363-3bf2-4565-81a0-e7e8a1dc0b71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78555dd857-hs25k_calico-system(7f194363-3bf2-4565-81a0-e7e8a1dc0b71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78555dd857-hs25k" podUID="7f194363-3bf2-4565-81a0-e7e8a1dc0b71" Aug 5 21:32:57.180725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65-shm.mount: Deactivated successfully. 
Aug 5 21:32:57.415010 containerd[1565]: time="2024-08-05T21:32:57.414876556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjpwf,Uid:926c7d21-c63e-46bb-9599-6d26d109fd83,Namespace:calico-system,Attempt:0,}" Aug 5 21:32:57.461586 containerd[1565]: time="2024-08-05T21:32:57.461457791Z" level=error msg="Failed to destroy network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.461783 containerd[1565]: time="2024-08-05T21:32:57.461744148Z" level=error msg="encountered an error cleaning up failed sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.461984 containerd[1565]: time="2024-08-05T21:32:57.461800268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjpwf,Uid:926c7d21-c63e-46bb-9599-6d26d109fd83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.462292 kubelet[2678]: E0805 21:32:57.462249 2678 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.462373 kubelet[2678]: E0805 21:32:57.462311 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:57.462373 kubelet[2678]: E0805 21:32:57.462336 2678 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fjpwf" Aug 5 21:32:57.462461 kubelet[2678]: E0805 21:32:57.462385 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fjpwf_calico-system(926c7d21-c63e-46bb-9599-6d26d109fd83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fjpwf_calico-system(926c7d21-c63e-46bb-9599-6d26d109fd83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fjpwf" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83" Aug 5 21:32:57.503739 kubelet[2678]: E0805 21:32:57.503713 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:32:57.505053 kubelet[2678]: I0805 21:32:57.505025 2678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:32:57.506459 containerd[1565]: time="2024-08-05T21:32:57.505546687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 21:32:57.506583 containerd[1565]: time="2024-08-05T21:32:57.506522239Z" level=info msg="StopPodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\"" Aug 5 21:32:57.506853 containerd[1565]: time="2024-08-05T21:32:57.506830516Z" level=info msg="Ensure that sandbox 5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48 in task-service has been cleanup successfully" Aug 5 21:32:57.509069 kubelet[2678]: I0805 21:32:57.508586 2678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:32:57.510371 containerd[1565]: time="2024-08-05T21:32:57.510329286Z" level=info msg="StopPodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\"" Aug 5 21:32:57.511241 containerd[1565]: time="2024-08-05T21:32:57.511205958Z" level=info msg="Ensure that sandbox 068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3 in task-service has been cleanup successfully" Aug 5 21:32:57.513016 kubelet[2678]: I0805 21:32:57.512982 2678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:32:57.513822 containerd[1565]: time="2024-08-05T21:32:57.513790976Z" level=info msg="StopPodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\"" Aug 5 21:32:57.514005 containerd[1565]: time="2024-08-05T21:32:57.513978694Z" level=info msg="Ensure that sandbox 
bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97 in task-service has been cleanup successfully" Aug 5 21:32:57.515377 kubelet[2678]: I0805 21:32:57.515355 2678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:32:57.516571 containerd[1565]: time="2024-08-05T21:32:57.516409073Z" level=info msg="StopPodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\"" Aug 5 21:32:57.520336 containerd[1565]: time="2024-08-05T21:32:57.519977202Z" level=info msg="Ensure that sandbox b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65 in task-service has been cleanup successfully" Aug 5 21:32:57.542980 containerd[1565]: time="2024-08-05T21:32:57.542927402Z" level=error msg="StopPodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" failed" error="failed to destroy network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.543409 kubelet[2678]: E0805 21:32:57.543215 2678 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:32:57.543409 kubelet[2678]: E0805 21:32:57.543294 2678 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48"} Aug 5 
21:32:57.543409 kubelet[2678]: E0805 21:32:57.543329 2678 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"926c7d21-c63e-46bb-9599-6d26d109fd83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:32:57.543409 kubelet[2678]: E0805 21:32:57.543380 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"926c7d21-c63e-46bb-9599-6d26d109fd83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fjpwf" podUID="926c7d21-c63e-46bb-9599-6d26d109fd83" Aug 5 21:32:57.547347 containerd[1565]: time="2024-08-05T21:32:57.547305244Z" level=error msg="StopPodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" failed" error="failed to destroy network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.549004 kubelet[2678]: E0805 21:32:57.548983 2678 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:32:57.549277 kubelet[2678]: E0805 21:32:57.549168 2678 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65"} Aug 5 21:32:57.549277 kubelet[2678]: E0805 21:32:57.549206 2678 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f194363-3bf2-4565-81a0-e7e8a1dc0b71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:32:57.549277 kubelet[2678]: E0805 21:32:57.549238 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f194363-3bf2-4565-81a0-e7e8a1dc0b71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78555dd857-hs25k" podUID="7f194363-3bf2-4565-81a0-e7e8a1dc0b71" Aug 5 21:32:57.554993 containerd[1565]: time="2024-08-05T21:32:57.554948378Z" level=error msg="StopPodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" failed" error="failed to destroy network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.555370 containerd[1565]: time="2024-08-05T21:32:57.554966578Z" level=error msg="StopPodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" failed" error="failed to destroy network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:32:57.555418 kubelet[2678]: E0805 21:32:57.555177 2678 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:32:57.555418 kubelet[2678]: E0805 21:32:57.555204 2678 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97"} Aug 5 21:32:57.555418 kubelet[2678]: E0805 21:32:57.555231 2678 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b1da061-9919-457a-87c2-aab1ccb0c931\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 
21:32:57.555418 kubelet[2678]: E0805 21:32:57.555257 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b1da061-9919-457a-87c2-aab1ccb0c931\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7g4l5" podUID="4b1da061-9919-457a-87c2-aab1ccb0c931" Aug 5 21:32:57.555543 kubelet[2678]: E0805 21:32:57.555177 2678 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:32:57.555543 kubelet[2678]: E0805 21:32:57.555288 2678 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3"} Aug 5 21:32:57.555543 kubelet[2678]: E0805 21:32:57.555313 2678 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"250133c2-f2cb-49cc-a9e1-4020ef81de96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:32:57.555543 kubelet[2678]: 
E0805 21:32:57.555335 2678 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"250133c2-f2cb-49cc-a9e1-4020ef81de96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-p2qhs" podUID="250133c2-f2cb-49cc-a9e1-4020ef81de96" Aug 5 21:32:58.047364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48-shm.mount: Deactivated successfully. Aug 5 21:33:00.773594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283616966.mount: Deactivated successfully. Aug 5 21:33:00.816965 containerd[1565]: time="2024-08-05T21:33:00.816911392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:00.817921 containerd[1565]: time="2024-08-05T21:33:00.817896744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 21:33:00.818800 containerd[1565]: time="2024-08-05T21:33:00.818761977Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:00.820593 containerd[1565]: time="2024-08-05T21:33:00.820534604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:00.821435 containerd[1565]: time="2024-08-05T21:33:00.821352117Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.315512152s" Aug 5 21:33:00.821435 containerd[1565]: time="2024-08-05T21:33:00.821386517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 21:33:00.827875 containerd[1565]: time="2024-08-05T21:33:00.827782347Z" level=info msg="CreateContainer within sandbox \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 21:33:00.848516 containerd[1565]: time="2024-08-05T21:33:00.848466105Z" level=info msg="CreateContainer within sandbox \"4fcbe026ff5c23df34a67c5c1570f0329cbb0c8f50f8000338a477b5a9828a57\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a01040613da5de9c845a9cad074f2e2fe54e216e9f0d9d709f318b61e12baa9a\"" Aug 5 21:33:00.849284 containerd[1565]: time="2024-08-05T21:33:00.849252899Z" level=info msg="StartContainer for \"a01040613da5de9c845a9cad074f2e2fe54e216e9f0d9d709f318b61e12baa9a\"" Aug 5 21:33:01.032330 containerd[1565]: time="2024-08-05T21:33:01.032159115Z" level=info msg="StartContainer for \"a01040613da5de9c845a9cad074f2e2fe54e216e9f0d9d709f318b61e12baa9a\" returns successfully" Aug 5 21:33:01.192943 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 21:33:01.193105 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 21:33:01.524944 kubelet[2678]: E0805 21:33:01.524846 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:01.552126 kubelet[2678]: I0805 21:33:01.552089 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xx4tm" podStartSLOduration=1.9838522809999999 podCreationTimestamp="2024-08-05 21:32:49 +0000 UTC" firstStartedPulling="2024-08-05 21:32:50.253511937 +0000 UTC m=+20.955125466" lastFinishedPulling="2024-08-05 21:33:00.821694235 +0000 UTC m=+31.523307764" observedRunningTime="2024-08-05 21:33:01.551639942 +0000 UTC m=+32.253253471" watchObservedRunningTime="2024-08-05 21:33:01.552034579 +0000 UTC m=+32.253648108" Aug 5 21:33:02.528372 kubelet[2678]: I0805 21:33:02.528128 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:33:02.529212 kubelet[2678]: E0805 21:33:02.529014 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:02.740149 systemd-networkd[1238]: vxlan.calico: Link UP Aug 5 21:33:02.740159 systemd-networkd[1238]: vxlan.calico: Gained carrier Aug 5 21:33:03.693587 kubelet[2678]: I0805 21:33:03.690700 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:33:03.693587 kubelet[2678]: E0805 21:33:03.691454 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:03.935478 systemd-networkd[1238]: vxlan.calico: Gained IPv6LL Aug 5 21:33:04.776072 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:43054.service - OpenSSH per-connection server daemon (10.0.0.1:43054). 
Aug 5 21:33:04.811673 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 43054 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:33:04.813336 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:33:04.822681 systemd-logind[1531]: New session 8 of user core.
Aug 5 21:33:04.831276 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 5 21:33:04.959957 sshd[3973]: pam_unix(sshd:session): session closed for user core
Aug 5 21:33:04.963377 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:43054.service: Deactivated successfully.
Aug 5 21:33:04.965649 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit.
Aug 5 21:33:04.965667 systemd[1]: session-8.scope: Deactivated successfully.
Aug 5 21:33:04.967585 systemd-logind[1531]: Removed session 8.
Aug 5 21:33:08.413510 containerd[1565]: time="2024-08-05T21:33:08.413093797Z" level=info msg="StopPodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\""
Aug 5 21:33:08.413510 containerd[1565]: time="2024-08-05T21:33:08.413173436Z" level=info msg="StopPodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\""
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.488 [INFO][4032] k8s.go 608: Cleaning up netns ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.489 [INFO][4032] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" iface="eth0" netns="/var/run/netns/cni-a994ebdf-63a3-ee58-4cf6-293c2b848de9"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.489 [INFO][4032] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" iface="eth0" netns="/var/run/netns/cni-a994ebdf-63a3-ee58-4cf6-293c2b848de9"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.490 [INFO][4032] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" iface="eth0" netns="/var/run/netns/cni-a994ebdf-63a3-ee58-4cf6-293c2b848de9"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.490 [INFO][4032] k8s.go 615: Releasing IP address(es) ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.490 [INFO][4032] utils.go 188: Calico CNI releasing IP address ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.584 [INFO][4048] ipam_plugin.go 411: Releasing address using handleID ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.584 [INFO][4048] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.584 [INFO][4048] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.603 [WARNING][4048] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.603 [INFO][4048] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.604 [INFO][4048] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 21:33:08.607271 containerd[1565]: 2024-08-05 21:33:08.605 [INFO][4032] k8s.go 621: Teardown processing complete. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97"
Aug 5 21:33:08.607704 containerd[1565]: time="2024-08-05T21:33:08.607417155Z" level=info msg="TearDown network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" successfully"
Aug 5 21:33:08.607704 containerd[1565]: time="2024-08-05T21:33:08.607444154Z" level=info msg="StopPodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" returns successfully"
Aug 5 21:33:08.609082 kubelet[2678]: E0805 21:33:08.609023    2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:33:08.609780 containerd[1565]: time="2024-08-05T21:33:08.609750500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g4l5,Uid:4b1da061-9919-457a-87c2-aab1ccb0c931,Namespace:kube-system,Attempt:1,}"
Aug 5 21:33:08.610940 systemd[1]: run-netns-cni\x2da994ebdf\x2d63a3\x2dee58\x2d4cf6\x2d293c2b848de9.mount: Deactivated successfully.
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.490 [INFO][4033] k8s.go 608: Cleaning up netns ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.491 [INFO][4033] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" iface="eth0" netns="/var/run/netns/cni-cb936a21-6371-fc1e-7b42-ed223eb7eb46"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.492 [INFO][4033] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" iface="eth0" netns="/var/run/netns/cni-cb936a21-6371-fc1e-7b42-ed223eb7eb46"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.492 [INFO][4033] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" iface="eth0" netns="/var/run/netns/cni-cb936a21-6371-fc1e-7b42-ed223eb7eb46"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.492 [INFO][4033] k8s.go 615: Releasing IP address(es) ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.492 [INFO][4033] utils.go 188: Calico CNI releasing IP address ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.586 [INFO][4049] ipam_plugin.go 411: Releasing address using handleID ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.587 [INFO][4049] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.604 [INFO][4049] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.615 [WARNING][4049] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.615 [INFO][4049] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.616 [INFO][4049] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 21:33:08.620676 containerd[1565]: 2024-08-05 21:33:08.618 [INFO][4033] k8s.go 621: Teardown processing complete. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65"
Aug 5 21:33:08.621125 containerd[1565]: time="2024-08-05T21:33:08.620854351Z" level=info msg="TearDown network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" successfully"
Aug 5 21:33:08.621125 containerd[1565]: time="2024-08-05T21:33:08.620874871Z" level=info msg="StopPodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" returns successfully"
Aug 5 21:33:08.622019 containerd[1565]: time="2024-08-05T21:33:08.621662466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78555dd857-hs25k,Uid:7f194363-3bf2-4565-81a0-e7e8a1dc0b71,Namespace:calico-system,Attempt:1,}"
Aug 5 21:33:08.623161 systemd[1]: run-netns-cni\x2dcb936a21\x2d6371\x2dfc1e\x2d7b42\x2ded223eb7eb46.mount: Deactivated successfully.
Aug 5 21:33:08.751032 systemd-networkd[1238]: cali17e314e23fb: Link UP
Aug 5 21:33:08.752202 systemd-networkd[1238]: cali17e314e23fb: Gained carrier
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.672 [INFO][4066] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--7g4l5-eth0 coredns-5dd5756b68- kube-system 4b1da061-9919-457a-87c2-aab1ccb0c931 801 0 2024-08-05 21:32:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-7g4l5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17e314e23fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.673 [INFO][4066] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.704 [INFO][4092] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" HandleID="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.720 [INFO][4092] ipam_plugin.go 264: Auto assigning IP ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" HandleID="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000133bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-7g4l5", "timestamp":"2024-08-05 21:33:08.704114596 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.720 [INFO][4092] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.720 [INFO][4092] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.721 [INFO][4092] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.723 [INFO][4092] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.729 [INFO][4092] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.733 [INFO][4092] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.735 [INFO][4092] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.737 [INFO][4092] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.737 [INFO][4092] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.739 [INFO][4092] ipam.go 1685: Creating new handle: k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.741 [INFO][4092] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.745 [INFO][4092] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.745 [INFO][4092] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" host="localhost"
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.745 [INFO][4092] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 21:33:08.770147 containerd[1565]: 2024-08-05 21:33:08.746 [INFO][4092] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" HandleID="k8s-pod-network.ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.770682 containerd[1565]: 2024-08-05 21:33:08.748 [INFO][4066] k8s.go 386: Populated endpoint ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7g4l5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4b1da061-9919-457a-87c2-aab1ccb0c931", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-7g4l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e314e23fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 21:33:08.770682 containerd[1565]: 2024-08-05 21:33:08.748 [INFO][4066] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.770682 containerd[1565]: 2024-08-05 21:33:08.748 [INFO][4066] dataplane_linux.go 68: Setting the host side veth name to cali17e314e23fb ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.770682 containerd[1565]: 2024-08-05 21:33:08.752 [INFO][4066] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.770682 containerd[1565]: 2024-08-05 21:33:08.753 [INFO][4066] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7g4l5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4b1da061-9919-457a-87c2-aab1ccb0c931", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff", Pod:"coredns-5dd5756b68-7g4l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e314e23fb", MAC:"96:d7:39:2f:09:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 21:33:08.770682 containerd[1565]: 2024-08-05 21:33:08.764 [INFO][4066] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff" Namespace="kube-system" Pod="coredns-5dd5756b68-7g4l5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0"
Aug 5 21:33:08.785597 systemd-networkd[1238]: cali3df62fe5e23: Link UP
Aug 5 21:33:08.785841 systemd-networkd[1238]: cali3df62fe5e23: Gained carrier
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.684 [INFO][4076] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0 calico-kube-controllers-78555dd857- calico-system 7f194363-3bf2-4565-81a0-e7e8a1dc0b71 802 0 2024-08-05 21:32:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78555dd857 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78555dd857-hs25k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3df62fe5e23 [] []}} ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.684 [INFO][4076] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.713 [INFO][4098] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" HandleID="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.723 [INFO][4098] ipam_plugin.go 264: Auto assigning IP ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" HandleID="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000699c50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78555dd857-hs25k", "timestamp":"2024-08-05 21:33:08.713145181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.724 [INFO][4098] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.746 [INFO][4098] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.746 [INFO][4098] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.748 [INFO][4098] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.754 [INFO][4098] ipam.go 372: Looking up existing affinities for host host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.763 [INFO][4098] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.765 [INFO][4098] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.769 [INFO][4098] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.769 [INFO][4098] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.771 [INFO][4098] ipam.go 1685: Creating new handle: k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.775 [INFO][4098] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.781 [INFO][4098] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.781 [INFO][4098] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" host="localhost"
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.781 [INFO][4098] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 21:33:08.798682 containerd[1565]: 2024-08-05 21:33:08.781 [INFO][4098] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" HandleID="k8s-pod-network.e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.799645 containerd[1565]: 2024-08-05 21:33:08.784 [INFO][4076] k8s.go 386: Populated endpoint ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0", GenerateName:"calico-kube-controllers-78555dd857-", Namespace:"calico-system", SelfLink:"", UID:"7f194363-3bf2-4565-81a0-e7e8a1dc0b71", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78555dd857", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78555dd857-hs25k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3df62fe5e23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 21:33:08.799645 containerd[1565]: 2024-08-05 21:33:08.784 [INFO][4076] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.799645 containerd[1565]: 2024-08-05 21:33:08.784 [INFO][4076] dataplane_linux.go 68: Setting the host side veth name to cali3df62fe5e23 ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.799645 containerd[1565]: 2024-08-05 21:33:08.785 [INFO][4076] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.799645 containerd[1565]: 2024-08-05 21:33:08.786 [INFO][4076] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0", GenerateName:"calico-kube-controllers-78555dd857-", Namespace:"calico-system", SelfLink:"", UID:"7f194363-3bf2-4565-81a0-e7e8a1dc0b71", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78555dd857", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb", Pod:"calico-kube-controllers-78555dd857-hs25k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3df62fe5e23", MAC:"5e:84:90:05:7b:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 21:33:08.799645 containerd[1565]: 2024-08-05 21:33:08.793 [INFO][4076] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb" Namespace="calico-system" Pod="calico-kube-controllers-78555dd857-hs25k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0"
Aug 5 21:33:08.803009 containerd[1565]: time="2024-08-05T21:33:08.802778346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:33:08.803009 containerd[1565]: time="2024-08-05T21:33:08.802846746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:33:08.803009 containerd[1565]: time="2024-08-05T21:33:08.802866546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:33:08.803009 containerd[1565]: time="2024-08-05T21:33:08.802897785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:33:08.819699 containerd[1565]: time="2024-08-05T21:33:08.819624962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:33:08.819699 containerd[1565]: time="2024-08-05T21:33:08.819681322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:33:08.819806 containerd[1565]: time="2024-08-05T21:33:08.819708961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:33:08.819806 containerd[1565]: time="2024-08-05T21:33:08.819726681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:33:08.830439 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 21:33:08.838713 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 21:33:08.862160 containerd[1565]: time="2024-08-05T21:33:08.862107379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g4l5,Uid:4b1da061-9919-457a-87c2-aab1ccb0c931,Namespace:kube-system,Attempt:1,} returns sandbox id \"ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff\""
Aug 5 21:33:08.862334 containerd[1565]: time="2024-08-05T21:33:08.862109739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78555dd857-hs25k,Uid:7f194363-3bf2-4565-81a0-e7e8a1dc0b71,Namespace:calico-system,Attempt:1,} returns sandbox id \"e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb\""
Aug 5 21:33:08.863363 kubelet[2678]: E0805 21:33:08.862896    2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:33:08.864553 containerd[1565]: time="2024-08-05T21:33:08.864523444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Aug 5 21:33:08.865943 containerd[1565]: time="2024-08-05T21:33:08.865712517Z" level=info msg="CreateContainer within sandbox \"ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 5 21:33:08.878223 containerd[1565]: time="2024-08-05T21:33:08.878187720Z" level=info msg="CreateContainer within sandbox \"ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8cf36f125730bb575f767853b8796fe5cdbe6ffe0462ee0482213e9c00d4e0f\""
Aug 5 21:33:08.878960 containerd[1565]: time="2024-08-05T21:33:08.878924835Z" level=info msg="StartContainer for \"e8cf36f125730bb575f767853b8796fe5cdbe6ffe0462ee0482213e9c00d4e0f\""
Aug 5 21:33:08.917934 containerd[1565]: time="2024-08-05T21:33:08.917882434Z" level=info msg="StartContainer for \"e8cf36f125730bb575f767853b8796fe5cdbe6ffe0462ee0482213e9c00d4e0f\" returns successfully"
Aug 5 21:33:09.547851 kubelet[2678]: E0805 21:33:09.547731    2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 21:33:09.558631 kubelet[2678]: I0805 21:33:09.557841    2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7g4l5" podStartSLOduration=26.557744521 podCreationTimestamp="2024-08-05 21:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:33:09.55628001 +0000 UTC m=+40.257893499" watchObservedRunningTime="2024-08-05 21:33:09.557744521 +0000 UTC m=+40.259358050"
Aug 5 21:33:09.979119 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:43064.service - OpenSSH per-connection server daemon (10.0.0.1:43064).
Aug 5 21:33:10.018271 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 43064 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc
Aug 5 21:33:10.019599 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 21:33:10.025197 systemd-logind[1531]: New session 9 of user core.
Aug 5 21:33:10.032113 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 5 21:33:10.159040 sshd[4261]: pam_unix(sshd:session): session closed for user core
Aug 5 21:33:10.163444 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:43064.service: Deactivated successfully.
Aug 5 21:33:10.167557 systemd[1]: session-9.scope: Deactivated successfully.
Aug 5 21:33:10.168790 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:33:10.170718 systemd-logind[1531]: Removed session 9. Aug 5 21:33:10.204077 systemd-networkd[1238]: cali17e314e23fb: Gained IPv6LL Aug 5 21:33:10.362674 containerd[1565]: time="2024-08-05T21:33:10.362207678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:10.363798 containerd[1565]: time="2024-08-05T21:33:10.363769229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 21:33:10.365038 containerd[1565]: time="2024-08-05T21:33:10.364446065Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:10.377884 containerd[1565]: time="2024-08-05T21:33:10.377850506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:10.378726 containerd[1565]: time="2024-08-05T21:33:10.378672661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.514113977s" Aug 5 21:33:10.378726 containerd[1565]: time="2024-08-05T21:33:10.378714261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 21:33:10.387299 containerd[1565]: 
time="2024-08-05T21:33:10.387156451Z" level=info msg="CreateContainer within sandbox \"e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 21:33:10.395700 containerd[1565]: time="2024-08-05T21:33:10.395603561Z" level=info msg="CreateContainer within sandbox \"e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a08f16f575b4dc4afcb4e6a0c669e38a41475493061fd1f64adabdf8d645a785\"" Aug 5 21:33:10.395952 systemd-networkd[1238]: cali3df62fe5e23: Gained IPv6LL Aug 5 21:33:10.396597 containerd[1565]: time="2024-08-05T21:33:10.396572116Z" level=info msg="StartContainer for \"a08f16f575b4dc4afcb4e6a0c669e38a41475493061fd1f64adabdf8d645a785\"" Aug 5 21:33:10.413447 containerd[1565]: time="2024-08-05T21:33:10.413409576Z" level=info msg="StopPodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\"" Aug 5 21:33:10.534184 containerd[1565]: time="2024-08-05T21:33:10.534007746Z" level=info msg="StartContainer for \"a08f16f575b4dc4afcb4e6a0c669e38a41475493061fd1f64adabdf8d645a785\" returns successfully" Aug 5 21:33:10.554842 kubelet[2678]: E0805 21:33:10.554440 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.472 [INFO][4314] k8s.go 608: Cleaning up netns ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.472 [INFO][4314] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" iface="eth0" netns="/var/run/netns/cni-988769c2-930d-8f75-9d8d-bfad2a85b4c8" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.472 [INFO][4314] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" iface="eth0" netns="/var/run/netns/cni-988769c2-930d-8f75-9d8d-bfad2a85b4c8" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.473 [INFO][4314] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" iface="eth0" netns="/var/run/netns/cni-988769c2-930d-8f75-9d8d-bfad2a85b4c8" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.473 [INFO][4314] k8s.go 615: Releasing IP address(es) ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.473 [INFO][4314] utils.go 188: Calico CNI releasing IP address ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.546 [INFO][4336] ipam_plugin.go 411: Releasing address using handleID ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.546 [INFO][4336] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.546 [INFO][4336] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.560 [WARNING][4336] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.560 [INFO][4336] ipam_plugin.go 439: Releasing address using workloadID ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.565 [INFO][4336] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:10.571553 containerd[1565]: 2024-08-05 21:33:10.567 [INFO][4314] k8s.go 621: Teardown processing complete. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:10.572270 containerd[1565]: time="2024-08-05T21:33:10.571697884Z" level=info msg="TearDown network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" successfully" Aug 5 21:33:10.572270 containerd[1565]: time="2024-08-05T21:33:10.571727204Z" level=info msg="StopPodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" returns successfully" Aug 5 21:33:10.572318 kubelet[2678]: E0805 21:33:10.572061 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:10.573353 containerd[1565]: time="2024-08-05T21:33:10.573311754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p2qhs,Uid:250133c2-f2cb-49cc-a9e1-4020ef81de96,Namespace:kube-system,Attempt:1,}" Aug 5 21:33:10.613494 systemd[1]: run-netns-cni\x2d988769c2\x2d930d\x2d8f75\x2d9d8d\x2dbfad2a85b4c8.mount: Deactivated successfully. 
Aug 5 21:33:10.705736 systemd-networkd[1238]: cali2106dbc3cbc: Link UP Aug 5 21:33:10.707209 systemd-networkd[1238]: cali2106dbc3cbc: Gained carrier Aug 5 21:33:10.718375 kubelet[2678]: I0805 21:33:10.718101 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78555dd857-hs25k" podStartSLOduration=19.203245168 podCreationTimestamp="2024-08-05 21:32:50 +0000 UTC" firstStartedPulling="2024-08-05 21:33:08.864094407 +0000 UTC m=+39.565707936" lastFinishedPulling="2024-08-05 21:33:10.37890946 +0000 UTC m=+41.080522989" observedRunningTime="2024-08-05 21:33:10.569905694 +0000 UTC m=+41.271519223" watchObservedRunningTime="2024-08-05 21:33:10.718060221 +0000 UTC m=+41.419673750" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.627 [INFO][4350] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--p2qhs-eth0 coredns-5dd5756b68- kube-system 250133c2-f2cb-49cc-a9e1-4020ef81de96 836 0 2024-08-05 21:32:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-p2qhs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2106dbc3cbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.627 [INFO][4350] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 
21:33:10.660 [INFO][4364] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" HandleID="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.671 [INFO][4364] ipam_plugin.go 264: Auto assigning IP ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" HandleID="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f5b20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-p2qhs", "timestamp":"2024-08-05 21:33:10.660432401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.671 [INFO][4364] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.671 [INFO][4364] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.671 [INFO][4364] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.674 [INFO][4364] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.680 [INFO][4364] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.685 [INFO][4364] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.686 [INFO][4364] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.689 [INFO][4364] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.689 [INFO][4364] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.691 [INFO][4364] ipam.go 1685: Creating new handle: k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0 Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.695 [INFO][4364] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.699 [INFO][4364] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" host="localhost" Aug 5 
21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.699 [INFO][4364] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" host="localhost" Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.700 [INFO][4364] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:10.722255 containerd[1565]: 2024-08-05 21:33:10.700 [INFO][4364] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" HandleID="k8s-pod-network.fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.723762 containerd[1565]: 2024-08-05 21:33:10.703 [INFO][4350] k8s.go 386: Populated endpoint ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p2qhs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"250133c2-f2cb-49cc-a9e1-4020ef81de96", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-5dd5756b68-p2qhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2106dbc3cbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:10.723762 containerd[1565]: 2024-08-05 21:33:10.703 [INFO][4350] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.723762 containerd[1565]: 2024-08-05 21:33:10.703 [INFO][4350] dataplane_linux.go 68: Setting the host side veth name to cali2106dbc3cbc ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.723762 containerd[1565]: 2024-08-05 21:33:10.706 [INFO][4350] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.723762 containerd[1565]: 2024-08-05 21:33:10.708 [INFO][4350] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p2qhs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"250133c2-f2cb-49cc-a9e1-4020ef81de96", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0", Pod:"coredns-5dd5756b68-p2qhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2106dbc3cbc", MAC:"ce:eb:05:cc:fb:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:10.723762 containerd[1565]: 2024-08-05 21:33:10.719 [INFO][4350] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0" Namespace="kube-system" Pod="coredns-5dd5756b68-p2qhs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:10.755437 containerd[1565]: time="2024-08-05T21:33:10.747133690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:33:10.756245 containerd[1565]: time="2024-08-05T21:33:10.755687040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:33:10.756245 containerd[1565]: time="2024-08-05T21:33:10.755713160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:33:10.756245 containerd[1565]: time="2024-08-05T21:33:10.755724200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:33:10.773702 systemd[1]: run-containerd-runc-k8s.io-fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0-runc.gBVuvn.mount: Deactivated successfully. 
Aug 5 21:33:10.784754 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:33:10.813156 containerd[1565]: time="2024-08-05T21:33:10.813109781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-p2qhs,Uid:250133c2-f2cb-49cc-a9e1-4020ef81de96,Namespace:kube-system,Attempt:1,} returns sandbox id \"fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0\"" Aug 5 21:33:10.813919 kubelet[2678]: E0805 21:33:10.813888 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:10.815944 containerd[1565]: time="2024-08-05T21:33:10.815912405Z" level=info msg="CreateContainer within sandbox \"fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:33:10.921386 containerd[1565]: time="2024-08-05T21:33:10.921266984Z" level=info msg="CreateContainer within sandbox \"fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0c1e2e5772855c1ffaf25b533f61328921eab691ae2891c7815f95ac0f16a83\"" Aug 5 21:33:10.923431 containerd[1565]: time="2024-08-05T21:33:10.923298292Z" level=info msg="StartContainer for \"a0c1e2e5772855c1ffaf25b533f61328921eab691ae2891c7815f95ac0f16a83\"" Aug 5 21:33:10.982627 containerd[1565]: time="2024-08-05T21:33:10.982512663Z" level=info msg="StartContainer for \"a0c1e2e5772855c1ffaf25b533f61328921eab691ae2891c7815f95ac0f16a83\" returns successfully" Aug 5 21:33:11.414392 containerd[1565]: time="2024-08-05T21:33:11.414088416Z" level=info msg="StopPodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\"" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.459 [INFO][4483] k8s.go 608: Cleaning up netns 
ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.459 [INFO][4483] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" iface="eth0" netns="/var/run/netns/cni-f5e1f57c-53df-4810-20ba-542fc4daf92e" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.460 [INFO][4483] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" iface="eth0" netns="/var/run/netns/cni-f5e1f57c-53df-4810-20ba-542fc4daf92e" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.460 [INFO][4483] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" iface="eth0" netns="/var/run/netns/cni-f5e1f57c-53df-4810-20ba-542fc4daf92e" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.460 [INFO][4483] k8s.go 615: Releasing IP address(es) ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.460 [INFO][4483] utils.go 188: Calico CNI releasing IP address ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.485 [INFO][4491] ipam_plugin.go 411: Releasing address using handleID ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.485 [INFO][4491] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.485 [INFO][4491] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.494 [WARNING][4491] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.495 [INFO][4491] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.496 [INFO][4491] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:11.499758 containerd[1565]: 2024-08-05 21:33:11.498 [INFO][4483] k8s.go 621: Teardown processing complete. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:11.500291 containerd[1565]: time="2024-08-05T21:33:11.499942001Z" level=info msg="TearDown network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" successfully" Aug 5 21:33:11.500291 containerd[1565]: time="2024-08-05T21:33:11.499981081Z" level=info msg="StopPodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" returns successfully" Aug 5 21:33:11.500772 containerd[1565]: time="2024-08-05T21:33:11.500740717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjpwf,Uid:926c7d21-c63e-46bb-9599-6d26d109fd83,Namespace:calico-system,Attempt:1,}" Aug 5 21:33:11.558612 kubelet[2678]: I0805 21:33:11.558579 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:33:11.559445 kubelet[2678]: E0805 21:33:11.559074 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:11.560494 kubelet[2678]: E0805 21:33:11.559807 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:11.572239 kubelet[2678]: I0805 21:33:11.572203 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-p2qhs" podStartSLOduration=28.572163145 podCreationTimestamp="2024-08-05 21:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:33:11.572115386 +0000 UTC m=+42.273728995" watchObservedRunningTime="2024-08-05 21:33:11.572163145 +0000 UTC m=+42.273776674" Aug 5 21:33:11.612836 systemd[1]: run-netns-cni\x2df5e1f57c\x2d53df\x2d4810\x2d20ba\x2d542fc4daf92e.mount: Deactivated successfully. Aug 5 21:33:11.670601 systemd-networkd[1238]: calid3b5fed83eb: Link UP Aug 5 21:33:11.672118 systemd-networkd[1238]: calid3b5fed83eb: Gained carrier Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.601 [INFO][4500] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fjpwf-eth0 csi-node-driver- calico-system 926c7d21-c63e-46bb-9599-6d26d109fd83 861 0 2024-08-05 21:32:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-fjpwf eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid3b5fed83eb [] []}} ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.601 [INFO][4500] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.628 [INFO][4515] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" HandleID="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.640 [INFO][4515] ipam_plugin.go 264: Auto assigning IP ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" HandleID="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000129d30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fjpwf", "timestamp":"2024-08-05 21:33:11.628791299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.640 [INFO][4515] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.640 [INFO][4515] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.640 [INFO][4515] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.642 [INFO][4515] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.646 [INFO][4515] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.650 [INFO][4515] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.653 [INFO][4515] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.655 [INFO][4515] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.655 [INFO][4515] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.657 [INFO][4515] ipam.go 1685: Creating new handle: k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8 Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.660 [INFO][4515] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.664 [INFO][4515] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" host="localhost" Aug 5 
21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.664 [INFO][4515] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" host="localhost" Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.664 [INFO][4515] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:11.684827 containerd[1565]: 2024-08-05 21:33:11.664 [INFO][4515] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" HandleID="k8s-pod-network.6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.685411 containerd[1565]: 2024-08-05 21:33:11.667 [INFO][4500] k8s.go 386: Populated endpoint ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjpwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"926c7d21-c63e-46bb-9599-6d26d109fd83", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fjpwf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid3b5fed83eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:11.685411 containerd[1565]: 2024-08-05 21:33:11.667 [INFO][4500] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.685411 containerd[1565]: 2024-08-05 21:33:11.667 [INFO][4500] dataplane_linux.go 68: Setting the host side veth name to calid3b5fed83eb ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.685411 containerd[1565]: 2024-08-05 21:33:11.672 [INFO][4500] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.685411 containerd[1565]: 2024-08-05 21:33:11.672 [INFO][4500] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjpwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"926c7d21-c63e-46bb-9599-6d26d109fd83", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8", Pod:"csi-node-driver-fjpwf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid3b5fed83eb", MAC:"ba:e4:ae:15:da:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:11.685411 containerd[1565]: 2024-08-05 21:33:11.681 [INFO][4500] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8" Namespace="calico-system" Pod="csi-node-driver-fjpwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:11.704451 containerd[1565]: time="2024-08-05T21:33:11.704326424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:33:11.704451 containerd[1565]: time="2024-08-05T21:33:11.704418624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:33:11.704451 containerd[1565]: time="2024-08-05T21:33:11.704440624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:33:11.704638 containerd[1565]: time="2024-08-05T21:33:11.704454464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:33:11.726061 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:33:11.736661 containerd[1565]: time="2024-08-05T21:33:11.736607399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fjpwf,Uid:926c7d21-c63e-46bb-9599-6d26d109fd83,Namespace:calico-system,Attempt:1,} returns sandbox id \"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8\"" Aug 5 21:33:11.738038 containerd[1565]: time="2024-08-05T21:33:11.738009830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 21:33:12.508934 systemd-networkd[1238]: cali2106dbc3cbc: Gained IPv6LL Aug 5 21:33:12.562273 kubelet[2678]: E0805 21:33:12.562244 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:12.818748 containerd[1565]: time="2024-08-05T21:33:12.818554350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:12.819616 containerd[1565]: time="2024-08-05T21:33:12.819573265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes 
read=7210579" Aug 5 21:33:12.821640 containerd[1565]: time="2024-08-05T21:33:12.821404934Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:12.824448 containerd[1565]: time="2024-08-05T21:33:12.824411517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:12.825200 containerd[1565]: time="2024-08-05T21:33:12.825165633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.087121363s" Aug 5 21:33:12.825330 containerd[1565]: time="2024-08-05T21:33:12.825203673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 21:33:12.829082 containerd[1565]: time="2024-08-05T21:33:12.829041291Z" level=info msg="CreateContainer within sandbox \"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 21:33:12.850882 containerd[1565]: time="2024-08-05T21:33:12.850836209Z" level=info msg="CreateContainer within sandbox \"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eccc78b6a74a5672fc17b9537e93e1c0190d5896160832d9cf088772fae77d28\"" Aug 5 21:33:12.851601 containerd[1565]: time="2024-08-05T21:33:12.851568964Z" level=info msg="StartContainer for 
\"eccc78b6a74a5672fc17b9537e93e1c0190d5896160832d9cf088772fae77d28\"" Aug 5 21:33:12.917262 containerd[1565]: time="2024-08-05T21:33:12.917213355Z" level=info msg="StartContainer for \"eccc78b6a74a5672fc17b9537e93e1c0190d5896160832d9cf088772fae77d28\" returns successfully" Aug 5 21:33:12.919199 containerd[1565]: time="2024-08-05T21:33:12.919162264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 21:33:13.566328 kubelet[2678]: E0805 21:33:13.566300 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:13.595963 systemd-networkd[1238]: calid3b5fed83eb: Gained IPv6LL Aug 5 21:33:14.036328 containerd[1565]: time="2024-08-05T21:33:14.036283696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:14.037612 containerd[1565]: time="2024-08-05T21:33:14.037572169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 21:33:14.038438 containerd[1565]: time="2024-08-05T21:33:14.038393964Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:14.040865 containerd[1565]: time="2024-08-05T21:33:14.040829911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:33:14.042111 containerd[1565]: time="2024-08-05T21:33:14.042076344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo 
tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.12287712s" Aug 5 21:33:14.042174 containerd[1565]: time="2024-08-05T21:33:14.042117544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 21:33:14.044192 containerd[1565]: time="2024-08-05T21:33:14.044158893Z" level=info msg="CreateContainer within sandbox \"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 21:33:14.055125 containerd[1565]: time="2024-08-05T21:33:14.055015194Z" level=info msg="CreateContainer within sandbox \"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dcbc610fb398f6622aa4f87a87bcbc3d55c3abd010fb2b2a6384882d60604715\"" Aug 5 21:33:14.056338 containerd[1565]: time="2024-08-05T21:33:14.055467312Z" level=info msg="StartContainer for \"dcbc610fb398f6622aa4f87a87bcbc3d55c3abd010fb2b2a6384882d60604715\"" Aug 5 21:33:14.107231 containerd[1565]: time="2024-08-05T21:33:14.105081564Z" level=info msg="StartContainer for \"dcbc610fb398f6622aa4f87a87bcbc3d55c3abd010fb2b2a6384882d60604715\" returns successfully" Aug 5 21:33:14.520408 kubelet[2678]: I0805 21:33:14.520036 2678 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 21:33:14.520408 kubelet[2678]: I0805 21:33:14.520075 2678 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 21:33:14.581154 kubelet[2678]: I0805 
21:33:14.581097 2678 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-fjpwf" podStartSLOduration=23.276078682 podCreationTimestamp="2024-08-05 21:32:49 +0000 UTC" firstStartedPulling="2024-08-05 21:33:11.737744032 +0000 UTC m=+42.439357561" lastFinishedPulling="2024-08-05 21:33:14.042296543 +0000 UTC m=+44.743910072" observedRunningTime="2024-08-05 21:33:14.580182715 +0000 UTC m=+45.281796244" watchObservedRunningTime="2024-08-05 21:33:14.580631193 +0000 UTC m=+45.282244682" Aug 5 21:33:15.174065 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:57420.service - OpenSSH per-connection server daemon (10.0.0.1:57420). Aug 5 21:33:15.212446 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 57420 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:15.213869 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:15.218392 systemd-logind[1531]: New session 10 of user core. Aug 5 21:33:15.229158 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 21:33:15.352404 sshd[4674]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:15.360229 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:57422.service - OpenSSH per-connection server daemon (10.0.0.1:57422). Aug 5 21:33:15.360642 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:57420.service: Deactivated successfully. Aug 5 21:33:15.363428 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:33:15.364894 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:33:15.366282 systemd-logind[1531]: Removed session 10. Aug 5 21:33:15.393731 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 57422 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:15.394976 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:15.399269 systemd-logind[1531]: New session 11 of user core. 
Aug 5 21:33:15.410101 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:33:15.669323 sshd[4687]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:15.677777 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:57422.service: Deactivated successfully. Aug 5 21:33:15.683930 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:33:15.684911 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. Aug 5 21:33:15.706190 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:57432.service - OpenSSH per-connection server daemon (10.0.0.1:57432). Aug 5 21:33:15.707178 systemd-logind[1531]: Removed session 11. Aug 5 21:33:15.744038 sshd[4704]: Accepted publickey for core from 10.0.0.1 port 57432 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:15.745345 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:15.749877 systemd-logind[1531]: New session 12 of user core. Aug 5 21:33:15.757078 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:33:15.875565 sshd[4704]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:15.878738 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:57432.service: Deactivated successfully. Aug 5 21:33:15.880954 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:33:15.881015 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:33:15.882276 systemd-logind[1531]: Removed session 12. Aug 5 21:33:16.279115 kubelet[2678]: I0805 21:33:16.279065 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:33:20.885069 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:57448.service - OpenSSH per-connection server daemon (10.0.0.1:57448). 
Aug 5 21:33:20.917511 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 57448 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:20.918807 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:20.924798 systemd-logind[1531]: New session 13 of user core. Aug 5 21:33:20.935047 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:33:21.053609 sshd[4764]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:21.061093 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:57460.service - OpenSSH per-connection server daemon (10.0.0.1:57460). Aug 5 21:33:21.061573 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:57448.service: Deactivated successfully. Aug 5 21:33:21.064404 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:33:21.065367 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:33:21.066506 systemd-logind[1531]: Removed session 13. Aug 5 21:33:21.092816 sshd[4776]: Accepted publickey for core from 10.0.0.1 port 57460 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:21.094060 sshd[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:21.098662 systemd-logind[1531]: New session 14 of user core. Aug 5 21:33:21.106089 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:33:21.382471 sshd[4776]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:21.389158 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:57464.service - OpenSSH per-connection server daemon (10.0.0.1:57464). Aug 5 21:33:21.389593 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:57460.service: Deactivated successfully. Aug 5 21:33:21.396235 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:33:21.396308 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:33:21.400866 systemd-logind[1531]: Removed session 14. 
Aug 5 21:33:21.432017 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 57464 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:21.433373 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:21.438851 systemd-logind[1531]: New session 15 of user core. Aug 5 21:33:21.446188 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 21:33:22.291524 sshd[4789]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:22.302162 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644). Aug 5 21:33:22.302663 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:57464.service: Deactivated successfully. Aug 5 21:33:22.315798 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:33:22.316252 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:33:22.318178 systemd-logind[1531]: Removed session 15. Aug 5 21:33:22.338509 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:22.339907 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:22.343874 systemd-logind[1531]: New session 16 of user core. Aug 5 21:33:22.357127 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:33:22.649920 sshd[4814]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:22.659191 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:57650.service - OpenSSH per-connection server daemon (10.0.0.1:57650). Aug 5 21:33:22.661307 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:57644.service: Deactivated successfully. Aug 5 21:33:22.665945 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:33:22.666952 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:33:22.667779 systemd-logind[1531]: Removed session 16. 
Aug 5 21:33:22.701450 sshd[4827]: Accepted publickey for core from 10.0.0.1 port 57650 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:22.702858 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:22.707839 systemd-logind[1531]: New session 17 of user core. Aug 5 21:33:22.715638 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 21:33:22.836202 sshd[4827]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:22.839043 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:57650.service: Deactivated successfully. Aug 5 21:33:22.841862 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:33:22.842537 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:33:22.843685 systemd-logind[1531]: Removed session 17. Aug 5 21:33:27.847290 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:57660.service - OpenSSH per-connection server daemon (10.0.0.1:57660). Aug 5 21:33:27.880962 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 57660 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:27.882659 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:27.887900 systemd-logind[1531]: New session 18 of user core. Aug 5 21:33:27.893063 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:33:28.014719 sshd[4859]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:28.018405 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:57660.service: Deactivated successfully. Aug 5 21:33:28.021669 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:33:28.023024 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:33:28.026057 systemd-logind[1531]: Removed session 18. 
Aug 5 21:33:29.422970 containerd[1565]: time="2024-08-05T21:33:29.422925825Z" level=info msg="StopPodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\"" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.463 [WARNING][4890] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0", GenerateName:"calico-kube-controllers-78555dd857-", Namespace:"calico-system", SelfLink:"", UID:"7f194363-3bf2-4565-81a0-e7e8a1dc0b71", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78555dd857", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb", Pod:"calico-kube-controllers-78555dd857-hs25k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3df62fe5e23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 
21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.464 [INFO][4890] k8s.go 608: Cleaning up netns ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.464 [INFO][4890] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" iface="eth0" netns="" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.464 [INFO][4890] k8s.go 615: Releasing IP address(es) ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.464 [INFO][4890] utils.go 188: Calico CNI releasing IP address ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.480 [INFO][4898] ipam_plugin.go 411: Releasing address using handleID ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.480 [INFO][4898] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.480 [INFO][4898] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.490 [WARNING][4898] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.490 [INFO][4898] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.492 [INFO][4898] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.495210 containerd[1565]: 2024-08-05 21:33:29.493 [INFO][4890] k8s.go 621: Teardown processing complete. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.495648 containerd[1565]: time="2024-08-05T21:33:29.495245068Z" level=info msg="TearDown network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" successfully" Aug 5 21:33:29.495648 containerd[1565]: time="2024-08-05T21:33:29.495276948Z" level=info msg="StopPodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" returns successfully" Aug 5 21:33:29.495897 containerd[1565]: time="2024-08-05T21:33:29.495872385Z" level=info msg="RemovePodSandbox for \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\"" Aug 5 21:33:29.495954 containerd[1565]: time="2024-08-05T21:33:29.495908385Z" level=info msg="Forcibly stopping sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\"" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.529 [WARNING][4921] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0", GenerateName:"calico-kube-controllers-78555dd857-", Namespace:"calico-system", SelfLink:"", UID:"7f194363-3bf2-4565-81a0-e7e8a1dc0b71", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78555dd857", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e53ca3620be86d40eedc43750545a3e8e9c46036b539d5fd487065d911a4e4fb", Pod:"calico-kube-controllers-78555dd857-hs25k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3df62fe5e23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.530 [INFO][4921] k8s.go 608: Cleaning up netns ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.530 [INFO][4921] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" iface="eth0" netns="" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.530 [INFO][4921] k8s.go 615: Releasing IP address(es) ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.530 [INFO][4921] utils.go 188: Calico CNI releasing IP address ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.547 [INFO][4929] ipam_plugin.go 411: Releasing address using handleID ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.547 [INFO][4929] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.547 [INFO][4929] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.556 [WARNING][4929] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.556 [INFO][4929] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" HandleID="k8s-pod-network.b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Workload="localhost-k8s-calico--kube--controllers--78555dd857--hs25k-eth0" Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.557 [INFO][4929] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.560230 containerd[1565]: 2024-08-05 21:33:29.558 [INFO][4921] k8s.go 621: Teardown processing complete. ContainerID="b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65" Aug 5 21:33:29.560628 containerd[1565]: time="2024-08-05T21:33:29.560282023Z" level=info msg="TearDown network for sandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" successfully" Aug 5 21:33:29.587226 containerd[1565]: time="2024-08-05T21:33:29.587168905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:33:29.587357 containerd[1565]: time="2024-08-05T21:33:29.587255584Z" level=info msg="RemovePodSandbox \"b8960703f4d4ff6d894b78e0e38f0f446941fae70e5e01df009b03ca320bdf65\" returns successfully" Aug 5 21:33:29.588282 containerd[1565]: time="2024-08-05T21:33:29.587978101Z" level=info msg="StopPodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\"" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.620 [WARNING][4951] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7g4l5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4b1da061-9919-457a-87c2-aab1ccb0c931", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff", Pod:"coredns-5dd5756b68-7g4l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e314e23fb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.620 [INFO][4951] k8s.go 608: Cleaning up netns ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.620 [INFO][4951] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" iface="eth0" netns="" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.620 [INFO][4951] k8s.go 615: Releasing IP address(es) ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.620 [INFO][4951] utils.go 188: Calico CNI releasing IP address ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.642 [INFO][4958] ipam_plugin.go 411: Releasing address using handleID ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.642 [INFO][4958] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.642 [INFO][4958] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.650 [WARNING][4958] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.650 [INFO][4958] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.652 [INFO][4958] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.654753 containerd[1565]: 2024-08-05 21:33:29.653 [INFO][4951] k8s.go 621: Teardown processing complete. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.655334 containerd[1565]: time="2024-08-05T21:33:29.655220006Z" level=info msg="TearDown network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" successfully" Aug 5 21:33:29.655334 containerd[1565]: time="2024-08-05T21:33:29.655252486Z" level=info msg="StopPodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" returns successfully" Aug 5 21:33:29.655905 containerd[1565]: time="2024-08-05T21:33:29.655609245Z" level=info msg="RemovePodSandbox for \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\"" Aug 5 21:33:29.655905 containerd[1565]: time="2024-08-05T21:33:29.655638565Z" level=info msg="Forcibly stopping sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\"" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.688 [WARNING][4981] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7g4l5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4b1da061-9919-457a-87c2-aab1ccb0c931", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae41b82b38f4688edb99177a53bbd5820030abcf7b6531b131c8d90fb62f2aff", Pod:"coredns-5dd5756b68-7g4l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e314e23fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.688 [INFO][4981] k8s.go 608: 
Cleaning up netns ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.688 [INFO][4981] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" iface="eth0" netns="" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.688 [INFO][4981] k8s.go 615: Releasing IP address(es) ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.688 [INFO][4981] utils.go 188: Calico CNI releasing IP address ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.705 [INFO][4989] ipam_plugin.go 411: Releasing address using handleID ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.705 [INFO][4989] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.706 [INFO][4989] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.713 [WARNING][4989] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.713 [INFO][4989] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" HandleID="k8s-pod-network.bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Workload="localhost-k8s-coredns--5dd5756b68--7g4l5-eth0" Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.714 [INFO][4989] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.717602 containerd[1565]: 2024-08-05 21:33:29.716 [INFO][4981] k8s.go 621: Teardown processing complete. ContainerID="bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97" Aug 5 21:33:29.717602 containerd[1565]: time="2024-08-05T21:33:29.717570733Z" level=info msg="TearDown network for sandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" successfully" Aug 5 21:33:29.720115 containerd[1565]: time="2024-08-05T21:33:29.720085122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:33:29.720115 containerd[1565]: time="2024-08-05T21:33:29.720141762Z" level=info msg="RemovePodSandbox \"bf57b789acdfa47f7924a0a6180b8e5a80871a90e7c8102f6620de927e25ed97\" returns successfully" Aug 5 21:33:29.720565 containerd[1565]: time="2024-08-05T21:33:29.720538800Z" level=info msg="StopPodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\"" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.752 [WARNING][5011] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjpwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"926c7d21-c63e-46bb-9599-6d26d109fd83", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8", Pod:"csi-node-driver-fjpwf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calid3b5fed83eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.752 [INFO][5011] k8s.go 608: Cleaning up netns ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.752 [INFO][5011] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" iface="eth0" netns="" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.752 [INFO][5011] k8s.go 615: Releasing IP address(es) ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.752 [INFO][5011] utils.go 188: Calico CNI releasing IP address ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.769 [INFO][5019] ipam_plugin.go 411: Releasing address using handleID ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.769 [INFO][5019] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.769 [INFO][5019] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.777 [WARNING][5019] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.777 [INFO][5019] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.779 [INFO][5019] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.781842 containerd[1565]: 2024-08-05 21:33:29.780 [INFO][5011] k8s.go 621: Teardown processing complete. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.782266 containerd[1565]: time="2024-08-05T21:33:29.781873571Z" level=info msg="TearDown network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" successfully" Aug 5 21:33:29.782266 containerd[1565]: time="2024-08-05T21:33:29.781898491Z" level=info msg="StopPodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" returns successfully" Aug 5 21:33:29.782392 containerd[1565]: time="2024-08-05T21:33:29.782352209Z" level=info msg="RemovePodSandbox for \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\"" Aug 5 21:33:29.782426 containerd[1565]: time="2024-08-05T21:33:29.782386809Z" level=info msg="Forcibly stopping sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\"" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.813 [WARNING][5041] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fjpwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"926c7d21-c63e-46bb-9599-6d26d109fd83", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6baf4c294eb4f4b56350f1ce3fd2ba69766d093cb1c617f69adb30ee420730d8", Pod:"csi-node-driver-fjpwf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid3b5fed83eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.814 [INFO][5041] k8s.go 608: Cleaning up netns ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.814 [INFO][5041] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" iface="eth0" netns="" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.814 [INFO][5041] k8s.go 615: Releasing IP address(es) ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.814 [INFO][5041] utils.go 188: Calico CNI releasing IP address ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.832 [INFO][5055] ipam_plugin.go 411: Releasing address using handleID ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.832 [INFO][5055] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.832 [INFO][5055] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.843 [WARNING][5055] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.843 [INFO][5055] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" HandleID="k8s-pod-network.5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Workload="localhost-k8s-csi--node--driver--fjpwf-eth0" Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.844 [INFO][5055] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:33:29.848587 containerd[1565]: 2024-08-05 21:33:29.846 [INFO][5041] k8s.go 621: Teardown processing complete. ContainerID="5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48" Aug 5 21:33:29.849210 containerd[1565]: time="2024-08-05T21:33:29.848628198Z" level=info msg="TearDown network for sandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" successfully" Aug 5 21:33:29.851218 containerd[1565]: time="2024-08-05T21:33:29.851180467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:33:29.851276 containerd[1565]: time="2024-08-05T21:33:29.851237267Z" level=info msg="RemovePodSandbox \"5206a5e0d1259ce19c047c65bf8b2f5a0a4a960a56335758f7ffc50c89ec9a48\" returns successfully" Aug 5 21:33:29.851969 containerd[1565]: time="2024-08-05T21:33:29.851597785Z" level=info msg="StopPodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\"" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.882 [WARNING][5091] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p2qhs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"250133c2-f2cb-49cc-a9e1-4020ef81de96", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0", Pod:"coredns-5dd5756b68-p2qhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2106dbc3cbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.883 [INFO][5091] k8s.go 608: Cleaning up netns 
ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.883 [INFO][5091] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" iface="eth0" netns="" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.883 [INFO][5091] k8s.go 615: Releasing IP address(es) ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.883 [INFO][5091] utils.go 188: Calico CNI releasing IP address ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.899 [INFO][5099] ipam_plugin.go 411: Releasing address using handleID ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.899 [INFO][5099] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.899 [INFO][5099] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.908 [WARNING][5099] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.908 [INFO][5099] ipam_plugin.go 439: Releasing address using workloadID ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.909 [INFO][5099] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.912211 containerd[1565]: 2024-08-05 21:33:29.910 [INFO][5091] k8s.go 621: Teardown processing complete. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.912902 containerd[1565]: time="2024-08-05T21:33:29.912665918Z" level=info msg="TearDown network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" successfully" Aug 5 21:33:29.912902 containerd[1565]: time="2024-08-05T21:33:29.912695918Z" level=info msg="StopPodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" returns successfully" Aug 5 21:33:29.913483 containerd[1565]: time="2024-08-05T21:33:29.913174835Z" level=info msg="RemovePodSandbox for \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\"" Aug 5 21:33:29.913483 containerd[1565]: time="2024-08-05T21:33:29.913204515Z" level=info msg="Forcibly stopping sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\"" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.946 [WARNING][5122] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--p2qhs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"250133c2-f2cb-49cc-a9e1-4020ef81de96", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd42c9671f934306633a5c867e8407862ec04502b1c885d7c6e979c4bb5a29d0", Pod:"coredns-5dd5756b68-p2qhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2106dbc3cbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.946 [INFO][5122] k8s.go 608: Cleaning up netns 
ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.946 [INFO][5122] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" iface="eth0" netns="" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.946 [INFO][5122] k8s.go 615: Releasing IP address(es) ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.946 [INFO][5122] utils.go 188: Calico CNI releasing IP address ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.963 [INFO][5130] ipam_plugin.go 411: Releasing address using handleID ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.963 [INFO][5130] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.963 [INFO][5130] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.970 [WARNING][5130] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.970 [INFO][5130] ipam_plugin.go 439: Releasing address using workloadID ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" HandleID="k8s-pod-network.068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Workload="localhost-k8s-coredns--5dd5756b68--p2qhs-eth0" Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.972 [INFO][5130] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:29.974928 containerd[1565]: 2024-08-05 21:33:29.973 [INFO][5122] k8s.go 621: Teardown processing complete. ContainerID="068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3" Aug 5 21:33:29.976622 containerd[1565]: time="2024-08-05T21:33:29.976176719Z" level=info msg="TearDown network for sandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" successfully" Aug 5 21:33:29.980079 containerd[1565]: time="2024-08-05T21:33:29.979926983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:33:29.980079 containerd[1565]: time="2024-08-05T21:33:29.980000502Z" level=info msg="RemovePodSandbox \"068c89225eaab713c3aa234d0475d89cc767219d0e8ef63ac1de59abedb3b0b3\" returns successfully" Aug 5 21:33:33.025081 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:34374.service - OpenSSH per-connection server daemon (10.0.0.1:34374). 
Aug 5 21:33:33.057473 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 34374 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:33.058604 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:33.063202 systemd-logind[1531]: New session 19 of user core. Aug 5 21:33:33.072138 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:33:33.179951 sshd[5140]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:33.183592 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:34374.service: Deactivated successfully. Aug 5 21:33:33.185703 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:33:33.185722 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:33:33.187030 systemd-logind[1531]: Removed session 19. Aug 5 21:33:33.789533 kubelet[2678]: E0805 21:33:33.788995 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 21:33:38.189050 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376). Aug 5 21:33:38.238691 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:vLb6+tqq+AN0xHsfmostacJPBc0ER3TTPjmoL0pNEEc Aug 5 21:33:38.242122 sshd[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:33:38.248593 systemd-logind[1531]: New session 20 of user core. Aug 5 21:33:38.255112 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:33:38.372168 sshd[5183]: pam_unix(sshd:session): session closed for user core Aug 5 21:33:38.375524 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:34376.service: Deactivated successfully. Aug 5 21:33:38.377986 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. 
Aug 5 21:33:38.378165 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:33:38.379670 systemd-logind[1531]: Removed session 20. Aug 5 21:33:39.009085 kubelet[2678]: I0805 21:33:39.004835 2678 topology_manager.go:215] "Topology Admit Handler" podUID="1655d9f3-971a-4613-8df5-c6b4951677a8" podNamespace="calico-apiserver" podName="calico-apiserver-666dffcbc8-hbf4j" Aug 5 21:33:39.169298 kubelet[2678]: I0805 21:33:39.169262 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flr6b\" (UniqueName: \"kubernetes.io/projected/1655d9f3-971a-4613-8df5-c6b4951677a8-kube-api-access-flr6b\") pod \"calico-apiserver-666dffcbc8-hbf4j\" (UID: \"1655d9f3-971a-4613-8df5-c6b4951677a8\") " pod="calico-apiserver/calico-apiserver-666dffcbc8-hbf4j" Aug 5 21:33:39.169853 kubelet[2678]: I0805 21:33:39.169833 2678 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1655d9f3-971a-4613-8df5-c6b4951677a8-calico-apiserver-certs\") pod \"calico-apiserver-666dffcbc8-hbf4j\" (UID: \"1655d9f3-971a-4613-8df5-c6b4951677a8\") " pod="calico-apiserver/calico-apiserver-666dffcbc8-hbf4j" Aug 5 21:33:39.316211 containerd[1565]: time="2024-08-05T21:33:39.316100693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666dffcbc8-hbf4j,Uid:1655d9f3-971a-4613-8df5-c6b4951677a8,Namespace:calico-apiserver,Attempt:0,}" Aug 5 21:33:39.419865 systemd-networkd[1238]: cali1cd535dc0fb: Link UP Aug 5 21:33:39.420455 systemd-networkd[1238]: cali1cd535dc0fb: Gained carrier Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.353 [INFO][5205] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0 calico-apiserver-666dffcbc8- calico-apiserver 1655d9f3-971a-4613-8df5-c6b4951677a8 1124 0 2024-08-05 21:33:38 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:666dffcbc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-666dffcbc8-hbf4j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1cd535dc0fb [] []}} ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.353 [INFO][5205] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.377 [INFO][5219] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" HandleID="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Workload="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.387 [INFO][5219] ipam_plugin.go 264: Auto assigning IP ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" HandleID="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Workload="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-666dffcbc8-hbf4j", "timestamp":"2024-08-05 21:33:39.377226385 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.387 [INFO][5219] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.387 [INFO][5219] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.387 [INFO][5219] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.391 [INFO][5219] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.394 [INFO][5219] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.398 [INFO][5219] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.400 [INFO][5219] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.402 [INFO][5219] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.402 [INFO][5219] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.403 [INFO][5219] ipam.go 1685: Creating new handle: k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03 Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.406 
[INFO][5219] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.412 [INFO][5219] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.413 [INFO][5219] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" host="localhost" Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.413 [INFO][5219] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:33:39.437349 containerd[1565]: 2024-08-05 21:33:39.413 [INFO][5219] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" HandleID="k8s-pod-network.6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Workload="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.437913 containerd[1565]: 2024-08-05 21:33:39.415 [INFO][5205] k8s.go 386: Populated endpoint ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0", GenerateName:"calico-apiserver-666dffcbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1655d9f3-971a-4613-8df5-c6b4951677a8", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 
33, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666dffcbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-666dffcbc8-hbf4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cd535dc0fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:39.437913 containerd[1565]: 2024-08-05 21:33:39.415 [INFO][5205] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.437913 containerd[1565]: 2024-08-05 21:33:39.415 [INFO][5205] dataplane_linux.go 68: Setting the host side veth name to cali1cd535dc0fb ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.437913 containerd[1565]: 2024-08-05 21:33:39.420 [INFO][5205] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" 
Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.437913 containerd[1565]: 2024-08-05 21:33:39.421 [INFO][5205] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0", GenerateName:"calico-apiserver-666dffcbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1655d9f3-971a-4613-8df5-c6b4951677a8", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 33, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666dffcbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03", Pod:"calico-apiserver-666dffcbc8-hbf4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cd535dc0fb", MAC:"1a:61:fc:63:f5:b1", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:33:39.437913 containerd[1565]: 2024-08-05 21:33:39.430 [INFO][5205] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03" Namespace="calico-apiserver" Pod="calico-apiserver-666dffcbc8-hbf4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--666dffcbc8--hbf4j-eth0" Aug 5 21:33:39.456188 containerd[1565]: time="2024-08-05T21:33:39.456098233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:33:39.456188 containerd[1565]: time="2024-08-05T21:33:39.456151434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:33:39.456188 containerd[1565]: time="2024-08-05T21:33:39.456164634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:33:39.456188 containerd[1565]: time="2024-08-05T21:33:39.456173994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:33:39.478461 systemd-resolved[1446]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 21:33:39.495419 containerd[1565]: time="2024-08-05T21:33:39.495380755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666dffcbc8-hbf4j,Uid:1655d9f3-971a-4613-8df5-c6b4951677a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6ef17191f182d6a5805638e7e11ab351afccb9dd91a6b74f2fc0379c9d0add03\"" Aug 5 21:33:39.497040 containerd[1565]: time="2024-08-05T21:33:39.497002257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""