Oct 8 20:02:14.884599 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 20:02:14.884620 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024
Oct 8 20:02:14.884629 kernel: KASLR enabled
Oct 8 20:02:14.884635 kernel: efi: EFI v2.7 by EDK II
Oct 8 20:02:14.884641 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 20:02:14.884646 kernel: random: crng init done
Oct 8 20:02:14.884654 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:02:14.884659 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 20:02:14.884665 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 20:02:14.884673 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884679 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884685 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884691 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884697 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884704 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884712 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884718 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884724 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:02:14.884731 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 20:02:14.884737 kernel: NUMA: Failed to initialise from firmware
Oct 8 20:02:14.884743 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 20:02:14.884749 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 8 20:02:14.884756 kernel: Zone ranges:
Oct 8 20:02:14.884762 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 20:02:14.884768 kernel:   DMA32    empty
Oct 8 20:02:14.884775 kernel:   Normal   empty
Oct 8 20:02:14.884781 kernel: Movable zone start for each node
Oct 8 20:02:14.884788 kernel: Early memory node ranges
Oct 8 20:02:14.884794 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 20:02:14.884800 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 20:02:14.884807 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 20:02:14.884813 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 20:02:14.884819 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 20:02:14.884826 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 20:02:14.884832 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 20:02:14.884838 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 20:02:14.884845 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 20:02:14.884852 kernel: psci: probing for conduit method from ACPI.
Oct 8 20:02:14.884859 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 20:02:14.884865 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 20:02:14.884874 kernel: psci: Trusted OS migration not required
Oct 8 20:02:14.884880 kernel: psci: SMC Calling Convention v1.1
Oct 8 20:02:14.884887 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 20:02:14.884895 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 20:02:14.884902 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 20:02:14.884909 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 20:02:14.884915 kernel: Detected PIPT I-cache on CPU0
Oct 8 20:02:14.884922 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 20:02:14.884929 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 20:02:14.884935 kernel: CPU features: detected: Spectre-v4
Oct 8 20:02:14.884942 kernel: CPU features: detected: Spectre-BHB
Oct 8 20:02:14.884948 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 20:02:14.884955 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 20:02:14.884963 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 20:02:14.884970 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 20:02:14.884976 kernel: alternatives: applying boot alternatives
Oct 8 20:02:14.884984 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 20:02:14.884991 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:02:14.884997 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 20:02:14.885004 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 20:02:14.885011 kernel: Fallback order for Node 0: 0
Oct 8 20:02:14.885018 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 20:02:14.885024 kernel: Policy zone: DMA
Oct 8 20:02:14.885031 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:02:14.885039 kernel: software IO TLB: area num 4.
Oct 8 20:02:14.885045 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 20:02:14.885053 kernel: Memory: 2386468K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 185820K reserved, 0K cma-reserved)
Oct 8 20:02:14.885060 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 20:02:14.885066 kernel: trace event string verifier disabled
Oct 8 20:02:14.885073 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:02:14.885080 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:02:14.885087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 20:02:14.885094 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:02:14.885101 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:02:14.885108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:02:14.885114 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 20:02:14.885122 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 20:02:14.885129 kernel: GICv3: 256 SPIs implemented
Oct 8 20:02:14.885144 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 20:02:14.885151 kernel: Root IRQ handler: gic_handle_irq
Oct 8 20:02:14.885157 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 20:02:14.885164 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 20:02:14.885171 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 20:02:14.885178 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 20:02:14.885185 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 20:02:14.885191 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 20:02:14.885198 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 20:02:14.885211 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:02:14.885218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:02:14.885224 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 20:02:14.885231 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 20:02:14.885311 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 20:02:14.885319 kernel: arm-pv: using stolen time PV
Oct 8 20:02:14.885326 kernel: Console: colour dummy device 80x25
Oct 8 20:02:14.885333 kernel: ACPI: Core revision 20230628
Oct 8 20:02:14.885340 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 20:02:14.885347 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:02:14.885357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 20:02:14.885363 kernel: landlock: Up and running.
Oct 8 20:02:14.885370 kernel: SELinux:  Initializing.
Oct 8 20:02:14.885377 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 20:02:14.885384 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 20:02:14.885391 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:02:14.885398 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:02:14.885405 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:02:14.885412 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:02:14.885420 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 20:02:14.885427 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 20:02:14.885434 kernel: Remapping and enabling EFI services.
Oct 8 20:02:14.885440 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:02:14.885447 kernel: Detected PIPT I-cache on CPU1
Oct 8 20:02:14.885454 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 20:02:14.885461 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 20:02:14.885468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:02:14.885475 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 20:02:14.885482 kernel: Detected PIPT I-cache on CPU2
Oct 8 20:02:14.885490 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 20:02:14.885497 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 20:02:14.885509 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:02:14.885517 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 20:02:14.885525 kernel: Detected PIPT I-cache on CPU3
Oct 8 20:02:14.885532 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 20:02:14.885539 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 20:02:14.885546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:02:14.885553 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 20:02:14.885562 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 20:02:14.885569 kernel: SMP: Total of 4 processors activated.
Oct 8 20:02:14.885576 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 20:02:14.885583 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 20:02:14.885591 kernel: CPU features: detected: Common not Private translations
Oct 8 20:02:14.885598 kernel: CPU features: detected: CRC32 instructions
Oct 8 20:02:14.885605 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 20:02:14.885612 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 20:02:14.885621 kernel: CPU features: detected: LSE atomic instructions
Oct 8 20:02:14.885628 kernel: CPU features: detected: Privileged Access Never
Oct 8 20:02:14.885635 kernel: CPU features: detected: RAS Extension Support
Oct 8 20:02:14.885642 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 20:02:14.885650 kernel: CPU: All CPU(s) started at EL1
Oct 8 20:02:14.885657 kernel: alternatives: applying system-wide alternatives
Oct 8 20:02:14.885664 kernel: devtmpfs: initialized
Oct 8 20:02:14.885671 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:02:14.885678 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 20:02:14.885687 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:02:14.885695 kernel: SMBIOS 3.0.0 present.
Oct 8 20:02:14.885706 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 20:02:14.885713 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:02:14.885721 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 20:02:14.885728 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 20:02:14.885735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 20:02:14.885743 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:02:14.885750 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Oct 8 20:02:14.885759 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:02:14.885766 kernel: cpuidle: using governor menu
Oct 8 20:02:14.885774 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 20:02:14.885781 kernel: ASID allocator initialised with 32768 entries
Oct 8 20:02:14.885788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:02:14.885795 kernel: Serial: AMBA PL011 UART driver
Oct 8 20:02:14.885802 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 20:02:14.885810 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 20:02:14.885817 kernel: Modules: 509024 pages in range for PLT usage
Oct 8 20:02:14.885825 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:02:14.885833 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:02:14.885840 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 20:02:14.885847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 20:02:14.885855 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:02:14.885862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:02:14.885869 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 20:02:14.885876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 20:02:14.885884 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:02:14.885892 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:02:14.885900 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:02:14.885907 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:02:14.885914 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:02:14.885921 kernel: ACPI: Interpreter enabled
Oct 8 20:02:14.885928 kernel: ACPI: Using GIC for interrupt routing
Oct 8 20:02:14.885936 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 20:02:14.885943 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 20:02:14.885950 kernel: printk: console [ttyAMA0] enabled
Oct 8 20:02:14.885959 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 20:02:14.886099 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 20:02:14.886184 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 20:02:14.886263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 20:02:14.886330 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 20:02:14.886393 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 20:02:14.886402 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 20:02:14.886413 kernel: PCI host bridge to bus 0000:00
Oct 8 20:02:14.886490 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 20:02:14.886552 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 20:02:14.886610 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 20:02:14.886666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 20:02:14.886750 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 20:02:14.886826 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 20:02:14.886896 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 20:02:14.886978 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 20:02:14.887044 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 20:02:14.887112 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 20:02:14.887189 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 20:02:14.887278 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 20:02:14.887339 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 20:02:14.887398 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 20:02:14.887455 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 20:02:14.887465 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 20:02:14.887472 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 20:02:14.887480 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 20:02:14.887487 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 20:02:14.887495 kernel: iommu: Default domain type: Translated
Oct 8 20:02:14.887502 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 20:02:14.887511 kernel: efivars: Registered efivars operations
Oct 8 20:02:14.887518 kernel: vgaarb: loaded
Oct 8 20:02:14.887526 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 20:02:14.887533 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:02:14.887540 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:02:14.887548 kernel: pnp: PnP ACPI init
Oct 8 20:02:14.887616 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 20:02:14.887626 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 20:02:14.887635 kernel: NET: Registered PF_INET protocol family
Oct 8 20:02:14.887643 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 20:02:14.887650 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 20:02:14.887658 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:02:14.887666 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 20:02:14.887673 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 20:02:14.887680 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 20:02:14.887688 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 20:02:14.887695 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 20:02:14.887704 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:02:14.887711 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:02:14.887718 kernel: kvm [1]: HYP mode not available
Oct 8 20:02:14.887725 kernel: Initialise system trusted keyrings
Oct 8 20:02:14.887733 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 20:02:14.887740 kernel: Key type asymmetric registered
Oct 8 20:02:14.887747 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:02:14.887755 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 20:02:14.887762 kernel: io scheduler mq-deadline registered
Oct 8 20:02:14.887770 kernel: io scheduler kyber registered
Oct 8 20:02:14.887778 kernel: io scheduler bfq registered
Oct 8 20:02:14.887785 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 20:02:14.887792 kernel: ACPI: button: Power Button [PWRB]
Oct 8 20:02:14.887800 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 20:02:14.887868 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 20:02:14.887878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:02:14.887886 kernel: thunder_xcv, ver 1.0
Oct 8 20:02:14.887893 kernel: thunder_bgx, ver 1.0
Oct 8 20:02:14.887902 kernel: nicpf, ver 1.0
Oct 8 20:02:14.887909 kernel: nicvf, ver 1.0
Oct 8 20:02:14.887982 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 20:02:14.888044 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T20:02:14 UTC (1728417734)
Oct 8 20:02:14.888054 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 20:02:14.888061 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 20:02:14.888069 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 20:02:14.888076 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 20:02:14.888085 kernel: NET: Registered PF_INET6 protocol family
Oct 8 20:02:14.888092 kernel: Segment Routing with IPv6
Oct 8 20:02:14.888099 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 20:02:14.888106 kernel: NET: Registered PF_PACKET protocol family
Oct 8 20:02:14.888113 kernel: Key type dns_resolver registered
Oct 8 20:02:14.888121 kernel: registered taskstats version 1
Oct 8 20:02:14.888128 kernel: Loading compiled-in X.509 certificates
Oct 8 20:02:14.888143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05'
Oct 8 20:02:14.888150 kernel: Key type .fscrypt registered
Oct 8 20:02:14.888159 kernel: Key type fscrypt-provisioning registered
Oct 8 20:02:14.888167 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 20:02:14.888174 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:02:14.888181 kernel: ima: No architecture policies found
Oct 8 20:02:14.888188 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 20:02:14.888196 kernel: clk: Disabling unused clocks
Oct 8 20:02:14.888203 kernel: Freeing unused kernel memory: 39360K
Oct 8 20:02:14.888210 kernel: Run /init as init process
Oct 8 20:02:14.888217 kernel:   with arguments:
Oct 8 20:02:14.888226 kernel:     /init
Oct 8 20:02:14.888233 kernel:   with environment:
Oct 8 20:02:14.888249 kernel:     HOME=/
Oct 8 20:02:14.888256 kernel:     TERM=linux
Oct 8 20:02:14.888263 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:02:14.888272 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:02:14.888281 systemd[1]: Detected virtualization kvm.
Oct 8 20:02:14.888289 systemd[1]: Detected architecture arm64.
Oct 8 20:02:14.888299 systemd[1]: Running in initrd.
Oct 8 20:02:14.888307 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:02:14.888314 systemd[1]: Hostname set to .
Oct 8 20:02:14.888322 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:02:14.888330 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:02:14.888338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:02:14.888345 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:02:14.888354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:02:14.888363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:02:14.888371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:02:14.888379 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:02:14.888388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:02:14.888396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:02:14.888404 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:02:14.888412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:14.888421 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:02:14.888429 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:02:14.888436 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:02:14.888444 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:02:14.888452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:02:14.888460 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:02:14.888467 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:02:14.888475 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:02:14.888485 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:02:14.888493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:02:14.888501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:02:14.888508 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:02:14.888516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:02:14.888524 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:02:14.888532 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:02:14.888539 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:02:14.888547 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:02:14.888556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:02:14.888564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:14.888572 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:02:14.888580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:02:14.888587 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:02:14.888596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:02:14.888605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:02:14.888613 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:02:14.888638 systemd-journald[237]: Collecting audit messages is disabled.
Oct 8 20:02:14.888659 kernel: Bridge firewalling registered
Oct 8 20:02:14.888667 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:02:14.888675 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:02:14.888683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:14.888691 systemd-journald[237]: Journal started
Oct 8 20:02:14.888710 systemd-journald[237]: Runtime Journal (/run/log/journal/1b627052a41f4f9ebe4874a6670b6e82) is 5.9M, max 47.3M, 41.4M free.
Oct 8 20:02:14.863710 systemd-modules-load[238]: Inserted module 'overlay'
Oct 8 20:02:14.890507 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:02:14.882082 systemd-modules-load[238]: Inserted module 'br_netfilter'
Oct 8 20:02:14.891403 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:14.894289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:14.895664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:02:14.900360 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:02:14.907951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:14.910188 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:14.920446 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:14.921640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:14.924027 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:02:14.936076 dracut-cmdline[280]: dracut-dracut-053
Oct 8 20:02:14.938474 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 20:02:14.945418 systemd-resolved[274]: Positive Trust Anchors:
Oct 8 20:02:14.945435 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:02:14.945466 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:02:14.950121 systemd-resolved[274]: Defaulting to hostname 'linux'.
Oct 8 20:02:14.951045 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:14.953013 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:02:15.009267 kernel: SCSI subsystem initialized
Oct 8 20:02:15.014254 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:02:15.021264 kernel: iscsi: registered transport (tcp)
Oct 8 20:02:15.035453 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:02:15.035468 kernel: QLogic iSCSI HBA Driver
Oct 8 20:02:15.076840 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:02:15.085396 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:02:15.101481 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:02:15.101524 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:02:15.102296 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:02:15.149274 kernel: raid6: neonx8   gen() 15791 MB/s
Oct 8 20:02:15.166265 kernel: raid6: neonx4   gen() 15673 MB/s
Oct 8 20:02:15.183259 kernel: raid6: neonx2   gen() 13209 MB/s
Oct 8 20:02:15.200258 kernel: raid6: neonx1   gen() 10464 MB/s
Oct 8 20:02:15.217265 kernel: raid6: int64x8  gen()  6962 MB/s
Oct 8 20:02:15.234264 kernel: raid6: int64x4  gen()  7337 MB/s
Oct 8 20:02:15.251252 kernel: raid6: int64x2  gen()  6114 MB/s
Oct 8 20:02:15.268250 kernel: raid6: int64x1  gen()  5050 MB/s
Oct 8 20:02:15.268264 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s
Oct 8 20:02:15.285256 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Oct 8 20:02:15.285270 kernel: raid6: using neon recovery algorithm
Oct 8 20:02:15.290254 kernel: xor: measuring software checksum speed
Oct 8 20:02:15.290269 kernel:    8regs           : 19778 MB/sec
Oct 8 20:02:15.291678 kernel:    32regs          : 18106 MB/sec
Oct 8 20:02:15.291692 kernel:    arm64_neon      : 25833 MB/sec
Oct 8 20:02:15.291702 kernel: xor: using function: arm64_neon (25833 MB/sec)
Oct 8 20:02:15.341263 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:02:15.351880 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:02:15.361408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:15.374607 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 8 20:02:15.377745 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:15.386440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 20:02:15.397543 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Oct 8 20:02:15.422905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:02:15.432485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:02:15.472313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:02:15.483404 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 20:02:15.494617 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:02:15.496070 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:02:15.498475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:02:15.500342 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:02:15.508431 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 20:02:15.518290 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 8 20:02:15.518447 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 20:02:15.520630 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:02:15.523198 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 20:02:15.526466 kernel: GPT:9289727 != 19775487
Oct 8 20:02:15.526488 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 20:02:15.526499 kernel: GPT:9289727 != 19775487
Oct 8 20:02:15.527561 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 20:02:15.527589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:02:15.528054 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:02:15.528187 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:15.531521 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:15.532472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:02:15.532680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:15.534466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:15.544664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:15.552279 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (504)
Oct 8 20:02:15.554081 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521)
Oct 8 20:02:15.556071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:15.563675 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 20:02:15.567931 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 20:02:15.571587 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 20:02:15.572496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 20:02:15.578409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 20:02:15.590380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 20:02:15.592368 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:15.595886 disk-uuid[549]: Primary Header is updated.
Oct 8 20:02:15.595886 disk-uuid[549]: Secondary Entries is updated.
Oct 8 20:02:15.595886 disk-uuid[549]: Secondary Header is updated.
Oct 8 20:02:15.598839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:02:15.614066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:16.614750 disk-uuid[551]: The operation has completed successfully.
Oct 8 20:02:16.616168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 20:02:16.634052 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 20:02:16.634165 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 20:02:16.651430 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 20:02:16.655064 sh[572]: Success
Oct 8 20:02:16.668254 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 20:02:16.704641 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 20:02:16.706162 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 20:02:16.706964 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 20:02:16.716793 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916
Oct 8 20:02:16.716823 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 20:02:16.716834 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 20:02:16.718638 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 20:02:16.718654 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 20:02:16.721601 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 20:02:16.722689 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 20:02:16.734405 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 20:02:16.736418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 20:02:16.742713 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 20:02:16.742749 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 20:02:16.742760 kernel: BTRFS info (device vda6): using free space tree
Oct 8 20:02:16.745268 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 20:02:16.752409 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 20:02:16.753895 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 20:02:16.759838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 20:02:16.769425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 20:02:16.831626 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:02:16.838416 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:02:16.867215 systemd-networkd[763]: lo: Link UP
Oct 8 20:02:16.867227 systemd-networkd[763]: lo: Gained carrier
Oct 8 20:02:16.867900 systemd-networkd[763]: Enumeration completed
Oct 8 20:02:16.868326 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:16.868329 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:02:16.871003 ignition[661]: Ignition 2.19.0
Oct 8 20:02:16.869046 systemd-networkd[763]: eth0: Link UP
Oct 8 20:02:16.871009 ignition[661]: Stage: fetch-offline
Oct 8 20:02:16.869049 systemd-networkd[763]: eth0: Gained carrier
Oct 8 20:02:16.871047 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:16.869055 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:16.871056 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 20:02:16.870844 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:02:16.871217 ignition[661]: parsed url from cmdline: ""
Oct 8 20:02:16.871918 systemd[1]: Reached target network.target - Network.
Oct 8 20:02:16.871221 ignition[661]: no config URL provided
Oct 8 20:02:16.881298 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:02:16.871225 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:02:16.871232 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:02:16.871271 ignition[661]: op(1): [started] loading QEMU firmware config module
Oct 8 20:02:16.871276 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 20:02:16.879858 ignition[661]: op(1): [finished] loading QEMU firmware config module
Oct 8 20:02:16.922932 ignition[661]: parsing config with SHA512: e7285970c60b96e0b09288952418b92e77bb92520b57116b6685295b0da73fccbf48811d3f9c3625b2fb7e865bb87fd922c50e45ff259c645a45acb8c2182e20
Oct 8 20:02:16.926933 unknown[661]: fetched base config from "system"
Oct 8 20:02:16.926942 unknown[661]: fetched user config from "qemu"
Oct 8 20:02:16.927549 ignition[661]: fetch-offline: fetch-offline passed
Oct 8 20:02:16.928753 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:02:16.927677 ignition[661]: Ignition finished successfully
Oct 8 20:02:16.930285 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 20:02:16.939414 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 20:02:16.949478 ignition[770]: Ignition 2.19.0
Oct 8 20:02:16.949487 ignition[770]: Stage: kargs
Oct 8 20:02:16.949644 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:16.949653 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 20:02:16.950515 ignition[770]: kargs: kargs passed
Oct 8 20:02:16.950564 ignition[770]: Ignition finished successfully
Oct 8 20:02:16.952991 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 20:02:16.954633 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 20:02:16.967653 ignition[779]: Ignition 2.19.0
Oct 8 20:02:16.967664 ignition[779]: Stage: disks
Oct 8 20:02:16.967824 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:16.967833 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 20:02:16.968689 ignition[779]: disks: disks passed
Oct 8 20:02:16.970570 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 20:02:16.968735 ignition[779]: Ignition finished successfully
Oct 8 20:02:16.971698 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 20:02:16.972783 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:02:16.974435 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:02:16.975684 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:02:16.977248 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:02:16.988423 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 20:02:16.998718 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 20:02:17.003084 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 20:02:17.014378 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 20:02:17.058259 kernel: EXT4-fs (vda9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none.
Oct 8 20:02:17.058432 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 20:02:17.059445 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:02:17.080364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:02:17.081856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 20:02:17.082829 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 20:02:17.082912 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 20:02:17.082939 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:02:17.088916 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Oct 8 20:02:17.088947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 20:02:17.090316 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 20:02:17.093246 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 20:02:17.093269 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 20:02:17.093279 kernel: BTRFS info (device vda6): using free space tree
Oct 8 20:02:17.096253 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 20:02:17.096877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:02:17.136023 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 20:02:17.140403 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Oct 8 20:02:17.144073 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 20:02:17.146911 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 20:02:17.211973 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 20:02:17.227516 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 20:02:17.228871 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 20:02:17.233261 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 20:02:17.248340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 20:02:17.249698 ignition[910]: INFO : Ignition 2.19.0
Oct 8 20:02:17.249698 ignition[910]: INFO : Stage: mount
Oct 8 20:02:17.249698 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:17.249698 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 20:02:17.252315 ignition[910]: INFO : mount: mount passed
Oct 8 20:02:17.252315 ignition[910]: INFO : Ignition finished successfully
Oct 8 20:02:17.251752 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 20:02:17.262374 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 20:02:17.716391 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 20:02:17.734516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:02:17.740511 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Oct 8 20:02:17.740547 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 20:02:17.740558 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 20:02:17.741765 kernel: BTRFS info (device vda6): using free space tree
Oct 8 20:02:17.743253 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 20:02:17.744581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:02:17.759886 ignition[941]: INFO : Ignition 2.19.0
Oct 8 20:02:17.759886 ignition[941]: INFO : Stage: files
Oct 8 20:02:17.761084 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:17.761084 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 20:02:17.761084 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 20:02:17.763700 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 20:02:17.763700 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 20:02:17.763700 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 20:02:17.763700 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 20:02:17.767532 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 20:02:17.767532 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 20:02:17.767532 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 20:02:17.763805 unknown[941]: wrote ssh authorized keys file for user: core
Oct 8 20:02:17.895415 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 20:02:18.017460 systemd-networkd[763]: eth0: Gained IPv6LL
Oct 8 20:02:18.262944 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:02:18.264390 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:02:18.272938 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Oct 8 20:02:18.600787 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 20:02:18.845544 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:02:18.845544 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 20:02:18.848302 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 20:02:18.880042 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 20:02:18.885669 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 20:02:18.887858 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 20:02:18.887858 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 20:02:18.887858 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 20:02:18.887858 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:02:18.887858 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:02:18.887858 ignition[941]: INFO : files: files passed
Oct 8 20:02:18.887858 ignition[941]: INFO : Ignition finished successfully
Oct 8 20:02:18.888414 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 20:02:18.896414 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 20:02:18.898808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 20:02:18.899913 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 20:02:18.899997 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 20:02:18.907855 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 20:02:18.911907 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:02:18.911907 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:02:18.915131 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:02:18.915603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:02:18.917467 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 20:02:18.923409 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 20:02:18.944084 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 20:02:18.944203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 20:02:18.945878 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 20:02:18.947158 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 20:02:18.948526 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 20:02:18.949269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 20:02:18.964469 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:02:18.966159 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 20:02:18.977298 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:02:18.978216 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:02:18.979688 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 20:02:18.980933 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 20:02:18.981047 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:02:18.982939 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 20:02:18.984364 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 20:02:18.985540 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 20:02:18.986794 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:02:18.988277 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 20:02:18.989730 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 20:02:18.991018 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:02:18.992621 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 20:02:18.994021 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 20:02:18.995287 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 20:02:18.996394 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 20:02:18.996507 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:02:18.998186 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:18.999736 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:02:19.001131 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 20:02:19.004317 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:02:19.005302 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 20:02:19.005417 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:02:19.007664 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 20:02:19.007777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:02:19.009292 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 20:02:19.010618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 20:02:19.011335 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:02:19.012941 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 20:02:19.014233 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 20:02:19.015955 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 20:02:19.016040 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:02:19.017208 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 20:02:19.017303 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:02:19.018524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 20:02:19.018632 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:02:19.019940 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 20:02:19.020038 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 20:02:19.031398 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 20:02:19.032090 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 20:02:19.032222 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:02:19.034895 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 20:02:19.036057 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 20:02:19.036215 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:02:19.038628 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 20:02:19.038980 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:02:19.042199 ignition[996]: INFO : Ignition 2.19.0
Oct 8 20:02:19.042199 ignition[996]: INFO : Stage: umount
Oct 8 20:02:19.042199 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:19.042199 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 20:02:19.046868 ignition[996]: INFO : umount: umount passed
Oct 8 20:02:19.046868 ignition[996]: INFO : Ignition finished successfully
Oct 8 20:02:19.044654 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 20:02:19.044750 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 20:02:19.046786 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 20:02:19.047689 systemd[1]: Stopped target network.target - Network.
Oct 8 20:02:19.053036 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 20:02:19.053110 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 20:02:19.054564 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 20:02:19.054605 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 20:02:19.056118 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 20:02:19.056170 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 20:02:19.057551 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 20:02:19.057588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 20:02:19.059176 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 20:02:19.060558 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:19.062297 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 20:02:19.062390 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 20:02:19.069631 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 20:02:19.069738 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:19.071629 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 20:02:19.071678 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:19.072307 systemd-networkd[763]: eth0: DHCPv6 lease lost
Oct 8 20:02:19.074422 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 20:02:19.074559 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 20:02:19.075866 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 20:02:19.075898 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:02:19.090379 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 20:02:19.091322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 20:02:19.091401 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:02:19.093218 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 20:02:19.093285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:19.095046 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 20:02:19.095094 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:02:19.097389 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:19.110679 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 20:02:19.110780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 20:02:19.112822 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 20:02:19.112986 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:19.114781 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 20:02:19.116035 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 20:02:19.118088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 20:02:19.118157 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:02:19.119733 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 20:02:19.119767 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:02:19.121114 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 20:02:19.121171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:02:19.123136 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 20:02:19.123183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:02:19.125204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:02:19.125266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:19.127591 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 20:02:19.127641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 20:02:19.142430 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 20:02:19.143201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 20:02:19.143280 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:19.144876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:02:19.144917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:19.147275 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 20:02:19.147358 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 20:02:19.149325 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 20:02:19.151300 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 20:02:19.160645 systemd[1]: Switching root.
Oct 8 20:02:19.184115 systemd-journald[237]: Journal stopped
Oct 8 20:02:19.860697 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Oct 8 20:02:19.860750 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 20:02:19.860765 kernel: SELinux: policy capability open_perms=1
Oct 8 20:02:19.860778 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 20:02:19.860788 kernel: SELinux: policy capability always_check_network=0
Oct 8 20:02:19.860797 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 20:02:19.860807 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 20:02:19.860817 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 20:02:19.860826 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 20:02:19.860836 kernel: audit: type=1403 audit(1728417739.320:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 20:02:19.860848 systemd[1]: Successfully loaded SELinux policy in 31.258ms.
Oct 8 20:02:19.860869 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.683ms.
Oct 8 20:02:19.860881 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:02:19.860893 systemd[1]: Detected virtualization kvm.
Oct 8 20:02:19.860903 systemd[1]: Detected architecture arm64.
Oct 8 20:02:19.860914 systemd[1]: Detected first boot.
Oct 8 20:02:19.860924 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:02:19.860935 zram_generator::config[1040]: No configuration found.
Oct 8 20:02:19.860947 systemd[1]: Populated /etc with preset unit settings.
Oct 8 20:02:19.860959 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 20:02:19.860970 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 20:02:19.860983 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 20:02:19.860994 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 20:02:19.861004 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 20:02:19.861015 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 20:02:19.861026 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 20:02:19.861037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 20:02:19.861048 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 20:02:19.861061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 20:02:19.861072 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 20:02:19.861082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:02:19.861092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:02:19.861103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 20:02:19.861113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 20:02:19.861134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 20:02:19.861146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:02:19.861157 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 20:02:19.861169 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:02:19.861180 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 20:02:19.861190 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 20:02:19.861201 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:02:19.861211 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 20:02:19.861222 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:02:19.861329 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:02:19.861345 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:02:19.861358 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:02:19.861368 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 20:02:19.861379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 20:02:19.861391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:02:19.861402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:02:19.861413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:02:19.861425 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 20:02:19.861436 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 20:02:19.861449 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 20:02:19.861461 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 20:02:19.861472 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 20:02:19.861482 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 20:02:19.861492 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 20:02:19.861504 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 20:02:19.861515 systemd[1]: Reached target machines.target - Containers.
Oct 8 20:02:19.861526 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 20:02:19.861537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:02:19.861549 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:02:19.861559 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 20:02:19.861570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:02:19.861580 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:02:19.861591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:02:19.861602 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 20:02:19.861626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:02:19.861638 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 20:02:19.861650 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 20:02:19.861663 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 20:02:19.861674 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 20:02:19.861684 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 20:02:19.861697 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:02:19.861707 kernel: fuse: init (API version 7.39)
Oct 8 20:02:19.861717 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:02:19.861727 kernel: loop: module loaded
Oct 8 20:02:19.861737 kernel: ACPI: bus type drm_connector registered
Oct 8 20:02:19.861748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 20:02:19.861760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 20:02:19.861771 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:02:19.861781 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 20:02:19.861791 systemd[1]: Stopped verity-setup.service.
Oct 8 20:02:19.861821 systemd-journald[1104]: Collecting audit messages is disabled.
Oct 8 20:02:19.861843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 20:02:19.861853 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 20:02:19.861866 systemd-journald[1104]: Journal started
Oct 8 20:02:19.861888 systemd-journald[1104]: Runtime Journal (/run/log/journal/1b627052a41f4f9ebe4874a6670b6e82) is 5.9M, max 47.3M, 41.4M free.
Oct 8 20:02:19.665530 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 20:02:19.690995 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 20:02:19.691422 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 20:02:19.865167 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:02:19.865484 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 20:02:19.866788 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 20:02:19.868212 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 20:02:19.869251 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 20:02:19.870338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:02:19.873292 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 20:02:19.874384 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 20:02:19.874558 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 20:02:19.876072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:02:19.877315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:02:19.878497 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:02:19.878630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:02:19.879729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:02:19.879890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:02:19.881282 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 20:02:19.881481 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 20:02:19.882593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:02:19.882728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:02:19.884060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:02:19.885165 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 20:02:19.886937 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 20:02:19.898937 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 20:02:19.908367 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 20:02:19.910168 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 20:02:19.911013 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 20:02:19.911048 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:02:19.912730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 20:02:19.914620 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 20:02:19.916417 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 20:02:19.917253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:02:19.920270 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 20:02:19.922453 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:02:19.926359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:02:19.927425 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:02:19.928495 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:02:19.931461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:02:19.934497 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:02:19.937655 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:02:19.942300 systemd-journald[1104]: Time spent on flushing to /var/log/journal/1b627052a41f4f9ebe4874a6670b6e82 is 22.529ms for 855 entries.
Oct 8 20:02:19.942300 systemd-journald[1104]: System Journal (/var/log/journal/1b627052a41f4f9ebe4874a6670b6e82) is 8.0M, max 195.6M, 187.6M free.
Oct 8 20:02:19.980581 systemd-journald[1104]: Received client request to flush runtime journal.
Oct 8 20:02:19.980629 kernel: loop0: detected capacity change from 0 to 114432
Oct 8 20:02:19.980652 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:02:19.943314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:02:19.944495 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:02:19.945550 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:02:19.946838 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:02:19.948420 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:02:19.953377 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:02:19.961442 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:02:19.967246 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:02:19.984265 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:02:19.986110 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:02:19.986799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:19.988201 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:02:19.992607 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:02:19.993978 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 20:02:20.000395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:02:20.002275 kernel: loop1: detected capacity change from 0 to 194512
Oct 8 20:02:20.016036 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Oct 8 20:02:20.016052 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Oct 8 20:02:20.019933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:20.025671 kernel: loop2: detected capacity change from 0 to 114328
Oct 8 20:02:20.085263 kernel: loop3: detected capacity change from 0 to 114432
Oct 8 20:02:20.089257 kernel: loop4: detected capacity change from 0 to 194512
Oct 8 20:02:20.094254 kernel: loop5: detected capacity change from 0 to 114328
Oct 8 20:02:20.097852 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 20:02:20.098257 (sd-merge)[1176]: Merged extensions into '/usr'.
Oct 8 20:02:20.101693 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:02:20.101792 systemd[1]: Reloading...
Oct 8 20:02:20.157269 zram_generator::config[1203]: No configuration found.
Oct 8 20:02:20.198703 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:02:20.245807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:02:20.281377 systemd[1]: Reloading finished in 179 ms.
Oct 8 20:02:20.318338 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:02:20.319582 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:02:20.332455 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:02:20.334281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:02:20.343916 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:02:20.343938 systemd[1]: Reloading...
Oct 8 20:02:20.354348 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:02:20.354894 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:02:20.355621 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:02:20.355959 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Oct 8 20:02:20.356078 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Oct 8 20:02:20.358423 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:02:20.358517 systemd-tmpfiles[1239]: Skipping /boot
Oct 8 20:02:20.365065 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:02:20.365164 systemd-tmpfiles[1239]: Skipping /boot
Oct 8 20:02:20.395271 zram_generator::config[1264]: No configuration found.
Oct 8 20:02:20.480033 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:02:20.516827 systemd[1]: Reloading finished in 172 ms.
Oct 8 20:02:20.531390 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:02:20.539730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:20.547777 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:02:20.550299 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:02:20.552343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:02:20.557510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:20.569503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:20.574229 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:02:20.577264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:02:20.578375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:02:20.582094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:02:20.584070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:02:20.585040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:02:20.592330 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:02:20.593954 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:02:20.595470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:02:20.595597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:02:20.596898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:02:20.597042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:02:20.597198 systemd-udevd[1308]: Using default interface naming scheme 'v255'.
Oct 8 20:02:20.598554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:02:20.598702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:02:20.606695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:02:20.606927 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:02:20.616552 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:02:20.617870 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:20.619726 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:02:20.624387 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:02:20.631278 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:02:20.635473 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:02:20.636976 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:02:20.643261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:02:20.657527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:02:20.660411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:02:20.665499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:02:20.669199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:02:20.670493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:02:20.673010 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:02:20.678484 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 20:02:20.679876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:02:20.680375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:02:20.682300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:02:20.688909 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1340)
Oct 8 20:02:20.690934 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 20:02:20.699104 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:02:20.699282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:02:20.704934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:02:20.705315 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1337)
Oct 8 20:02:20.712591 augenrules[1368]: No rules
Oct 8 20:02:20.715703 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:02:20.720691 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:02:20.720838 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:02:20.723809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:02:20.723971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:02:20.729271 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1337)
Oct 8 20:02:20.731093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:02:20.737765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 20:02:20.742604 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:02:20.758759 systemd-resolved[1306]: Positive Trust Anchors:
Oct 8 20:02:20.758776 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:02:20.758807 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:02:20.767178 systemd-resolved[1306]: Defaulting to hostname 'linux'.
Oct 8 20:02:20.771009 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:02:20.779050 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:20.780088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:02:20.803452 systemd-networkd[1362]: lo: Link UP
Oct 8 20:02:20.803462 systemd-networkd[1362]: lo: Gained carrier
Oct 8 20:02:20.804233 systemd-networkd[1362]: Enumeration completed
Oct 8 20:02:20.804464 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:02:20.805523 systemd[1]: Reached target network.target - Network.
Oct 8 20:02:20.807219 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:20.807228 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:02:20.810143 systemd-networkd[1362]: eth0: Link UP
Oct 8 20:02:20.810151 systemd-networkd[1362]: eth0: Gained carrier
Oct 8 20:02:20.810165 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:20.817469 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:02:20.818477 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 20:02:20.822492 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:02:20.827421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:20.829293 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:02:20.830311 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:02:20.830341 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection.
Oct 8 20:02:20.333740 systemd-resolved[1306]: Clock change detected. Flushing caches.
Oct 8 20:02:20.339445 systemd-journald[1104]: Time jumped backwards, rotating.
Oct 8 20:02:20.333827 systemd-timesyncd[1363]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 20:02:20.333878 systemd-timesyncd[1363]: Initial clock synchronization to Tue 2024-10-08 20:02:20.333706 UTC.
Oct 8 20:02:20.335807 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:02:20.351005 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:02:20.372237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:20.380546 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:02:20.382202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:20.383283 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:02:20.384318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:02:20.385442 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:02:20.386809 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:02:20.387887 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:02:20.388858 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:02:20.389865 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:02:20.389902 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:02:20.390614 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:02:20.392181 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:02:20.394327 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:02:20.404742 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:02:20.406779 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:02:20.408114 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 20:02:20.409146 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:02:20.409920 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:02:20.410658 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:02:20.410689 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:02:20.411594 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 20:02:20.413411 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 20:02:20.415723 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:02:20.417029 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 20:02:20.420855 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 20:02:20.421656 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 20:02:20.426160 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 20:02:20.426765 jq[1406]: false
Oct 8 20:02:20.428229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 20:02:20.430774 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 20:02:20.435823 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 20:02:20.444895 extend-filesystems[1407]: Found loop3
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found loop4
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found loop5
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda1
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda2
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda3
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found usr
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda4
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda6
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda7
Oct 8 20:02:20.447290 extend-filesystems[1407]: Found vda9
Oct 8 20:02:20.447290 extend-filesystems[1407]: Checking size of /dev/vda9
Oct 8 20:02:20.445846 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 20:02:20.473282 extend-filesystems[1407]: Resized partition /dev/vda9
Oct 8 20:02:20.467448 dbus-daemon[1405]: [system] SELinux support is enabled
Oct 8 20:02:20.449507 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 20:02:20.449940 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 20:02:20.450614 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 20:02:20.454757 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 20:02:20.474558 jq[1424]: true
Oct 8 20:02:20.456460 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:02:20.460514 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 20:02:20.460716 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 20:02:20.460972 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 20:02:20.461097 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 20:02:20.465967 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 20:02:20.466105 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 20:02:20.468944 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 20:02:20.484950 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 20:02:20.488574 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 20:02:20.490938 jq[1431]: true
Oct 8 20:02:20.489103 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 20:02:20.490269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 20:02:20.490292 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 20:02:20.503374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1356)
Oct 8 20:02:20.507939 tar[1429]: linux-arm64/helm
Oct 8 20:02:20.515361 update_engine[1422]: I20241008 20:02:20.514931  1422 main.cc:92] Flatcar Update Engine starting
Oct 8 20:02:20.519193 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 20:02:20.521711 update_engine[1422]: I20241008 20:02:20.519871  1422 update_check_scheduler.cc:74] Next update check in 2m7s
Oct 8 20:02:20.529536 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024)
Oct 8 20:02:20.541659 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 20:02:20.540884 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 20:02:20.569417 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 20:02:20.570433 systemd-logind[1415]: New seat seat0.
Oct 8 20:02:20.571525 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 20:02:20.576651 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 8 20:02:20.598193 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 20:02:20.598193 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 20:02:20.598193 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 8 20:02:20.601066 extend-filesystems[1407]: Resized filesystem in /dev/vda9
Oct 8 20:02:20.599163 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 20:02:20.603576 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 20:02:20.609962 bash[1462]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:02:20.612693 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 20:02:20.614159 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 20:02:20.630465 locksmithd[1446]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 20:02:20.747641 containerd[1434]: time="2024-10-08T20:02:20.744972489Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 20:02:20.779197 containerd[1434]: time="2024-10-08T20:02:20.779140369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.780749 containerd[1434]: time="2024-10-08T20:02:20.780703569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:20.780749 containerd[1434]: time="2024-10-08T20:02:20.780739529Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 20:02:20.780817 containerd[1434]: time="2024-10-08T20:02:20.780755449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 20:02:20.780949 containerd[1434]: time="2024-10-08T20:02:20.780918609Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 20:02:20.780949 containerd[1434]: time="2024-10-08T20:02:20.780944249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781016 containerd[1434]: time="2024-10-08T20:02:20.781000089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781036 containerd[1434]: time="2024-10-08T20:02:20.781017769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781211 containerd[1434]: time="2024-10-08T20:02:20.781184089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781211 containerd[1434]: time="2024-10-08T20:02:20.781205529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781249 containerd[1434]: time="2024-10-08T20:02:20.781218729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781249 containerd[1434]: time="2024-10-08T20:02:20.781228369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781314 containerd[1434]: time="2024-10-08T20:02:20.781298529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781526 containerd[1434]: time="2024-10-08T20:02:20.781490089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781652 containerd[1434]: time="2024-10-08T20:02:20.781611089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:20.781677 containerd[1434]: time="2024-10-08T20:02:20.781652569Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 20:02:20.781748 containerd[1434]: time="2024-10-08T20:02:20.781732569Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 20:02:20.781789 containerd[1434]: time="2024-10-08T20:02:20.781777569Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 20:02:20.784934 containerd[1434]: time="2024-10-08T20:02:20.784906249Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 20:02:20.785004 containerd[1434]: time="2024-10-08T20:02:20.784952409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 20:02:20.785004 containerd[1434]: time="2024-10-08T20:02:20.784967929Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 20:02:20.785004 containerd[1434]: time="2024-10-08T20:02:20.784982929Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 20:02:20.785004 containerd[1434]: time="2024-10-08T20:02:20.784996369Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 20:02:20.785147 containerd[1434]: time="2024-10-08T20:02:20.785125969Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 20:02:20.785364 containerd[1434]: time="2024-10-08T20:02:20.785347769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 20:02:20.785470 containerd[1434]: time="2024-10-08T20:02:20.785452849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 20:02:20.785498 containerd[1434]: time="2024-10-08T20:02:20.785474009Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 20:02:20.785498 containerd[1434]: time="2024-10-08T20:02:20.785491969Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 20:02:20.785543 containerd[1434]: time="2024-10-08T20:02:20.785516049Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785543 containerd[1434]: time="2024-10-08T20:02:20.785531889Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785580 containerd[1434]: time="2024-10-08T20:02:20.785545209Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785580 containerd[1434]: time="2024-10-08T20:02:20.785559529Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785580 containerd[1434]: time="2024-10-08T20:02:20.785573289Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785648 containerd[1434]: time="2024-10-08T20:02:20.785586009Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785648 containerd[1434]: time="2024-10-08T20:02:20.785599529Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785648 containerd[1434]: time="2024-10-08T20:02:20.785616649Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 20:02:20.785705 containerd[1434]: time="2024-10-08T20:02:20.785672009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785705 containerd[1434]: time="2024-10-08T20:02:20.785686449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785705 containerd[1434]: time="2024-10-08T20:02:20.785699049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785758 containerd[1434]: time="2024-10-08T20:02:20.785713849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785758 containerd[1434]: time="2024-10-08T20:02:20.785726729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785758 containerd[1434]: time="2024-10-08T20:02:20.785738969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785758 containerd[1434]: time="2024-10-08T20:02:20.785750969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785823 containerd[1434]: time="2024-10-08T20:02:20.785763449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785823 containerd[1434]: time="2024-10-08T20:02:20.785776809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785823 containerd[1434]: time="2024-10-08T20:02:20.785790409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785823 containerd[1434]: time="2024-10-08T20:02:20.785802809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785823 containerd[1434]: time="2024-10-08T20:02:20.785814609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785939 containerd[1434]: time="2024-10-08T20:02:20.785827129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785939 containerd[1434]: time="2024-10-08T20:02:20.785842249Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 20:02:20.785939 containerd[1434]: time="2024-10-08T20:02:20.785861489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785939 containerd[1434]: time="2024-10-08T20:02:20.785873729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.785939 containerd[1434]: time="2024-10-08T20:02:20.785884369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 20:02:20.786027 containerd[1434]: time="2024-10-08T20:02:20.785993769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 20:02:20.786027 containerd[1434]: time="2024-10-08T20:02:20.786011089Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 20:02:20.786027 containerd[1434]: time="2024-10-08T20:02:20.786022609Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 20:02:20.786080 containerd[1434]: time="2024-10-08T20:02:20.786034889Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 20:02:20.786080 containerd[1434]: time="2024-10-08T20:02:20.786044689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.786080 containerd[1434]: time="2024-10-08T20:02:20.786056729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 20:02:20.786080 containerd[1434]: time="2024-10-08T20:02:20.786065769Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 20:02:20.786080 containerd[1434]: time="2024-10-08T20:02:20.786076089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 20:02:20.786472 containerd[1434]: time="2024-10-08T20:02:20.786414449Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 20:02:20.786593 containerd[1434]: time="2024-10-08T20:02:20.786479049Z" level=info msg="Connect containerd service"
Oct 8 20:02:20.786593 containerd[1434]: time="2024-10-08T20:02:20.786511409Z" level=info msg="using legacy CRI server"
Oct 8 20:02:20.786593 containerd[1434]: time="2024-10-08T20:02:20.786519169Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 20:02:20.788381 containerd[1434]: time="2024-10-08T20:02:20.788350609Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 20:02:20.789024 containerd[1434]: time="2024-10-08T20:02:20.788989729Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 20:02:20.789439 containerd[1434]: time="2024-10-08T20:02:20.789347489Z" level=info msg="Start subscribing containerd event"
Oct 8 20:02:20.789779 containerd[1434]: time="2024-10-08T20:02:20.789755609Z" level=info msg="Start recovering state"
Oct 8 20:02:20.791277 containerd[1434]: time="2024-10-08T20:02:20.791135289Z" level=info msg="Start event monitor"
Oct 8 20:02:20.791277 containerd[1434]: time="2024-10-08T20:02:20.791173049Z" level=info msg="Start snapshots syncer"
Oct 8 20:02:20.791277 containerd[1434]: time="2024-10-08T20:02:20.791184689Z" level=info msg="Start cni network conf syncer for default"
Oct 8 20:02:20.791277 containerd[1434]: time="2024-10-08T20:02:20.791193769Z" level=info msg="Start streaming server"
Oct 8 20:02:20.792638 containerd[1434]: time="2024-10-08T20:02:20.791612009Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 20:02:20.792638 containerd[1434]: time="2024-10-08T20:02:20.791694969Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 20:02:20.791833 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 20:02:20.792817 containerd[1434]: time="2024-10-08T20:02:20.792788129Z" level=info msg="containerd successfully booted in 0.049777s"
Oct 8 20:02:20.907770 tar[1429]: linux-arm64/LICENSE
Oct 8 20:02:20.907770 tar[1429]: linux-arm64/README.md
Oct 8 20:02:20.921039 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 20:02:21.679842 systemd-networkd[1362]: eth0: Gained IPv6LL
Oct 8 20:02:21.685390 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 20:02:21.687007 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 20:02:21.698845 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 8 20:02:21.701374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:02:21.703278 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 20:02:21.724162 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 20:02:21.731327 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 8 20:02:21.731771 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 8 20:02:21.733014 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 20:02:21.897393 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 20:02:21.917402 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 20:02:21.924927 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 20:02:21.931260 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 20:02:21.931454 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 20:02:21.934668 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 20:02:21.951318 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 20:02:21.953890 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 20:02:21.955790 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 8 20:02:21.957115 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 20:02:22.216311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:22.217494 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 20:02:22.220990 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:02:22.222823 systemd[1]: Startup finished in 518ms (kernel) + 4.621s (initrd) + 3.433s (userspace) = 8.574s.
Oct 8 20:02:22.697385 kubelet[1517]: E1008 20:02:22.697235 1517 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:02:22.699909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:02:22.700059 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:02:27.163361 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 20:02:27.164460 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:46580.service - OpenSSH per-connection server daemon (10.0.0.1:46580).
Oct 8 20:02:27.214090 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 46580 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:27.215903 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:27.226238 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 20:02:27.236858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 20:02:27.239251 systemd-logind[1415]: New session 1 of user core.
Oct 8 20:02:27.246608 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 20:02:27.248589 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 20:02:27.254500 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:02:27.326371 systemd[1535]: Queued start job for default target default.target.
Oct 8 20:02:27.335753 systemd[1535]: Created slice app.slice - User Application Slice.
Oct 8 20:02:27.335797 systemd[1535]: Reached target paths.target - Paths.
Oct 8 20:02:27.335809 systemd[1535]: Reached target timers.target - Timers.
Oct 8 20:02:27.337310 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 20:02:27.350338 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 20:02:27.350457 systemd[1535]: Reached target sockets.target - Sockets.
Oct 8 20:02:27.350476 systemd[1535]: Reached target basic.target - Basic System.
Oct 8 20:02:27.350527 systemd[1535]: Reached target default.target - Main User Target.
Oct 8 20:02:27.350555 systemd[1535]: Startup finished in 91ms.
Oct 8 20:02:27.350891 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 20:02:27.352125 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 20:02:27.415103 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:46584.service - OpenSSH per-connection server daemon (10.0.0.1:46584).
Oct 8 20:02:27.460987 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 46584 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:27.462742 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:27.468233 systemd-logind[1415]: New session 2 of user core.
Oct 8 20:02:27.482876 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 20:02:27.537074 sshd[1546]: pam_unix(sshd:session): session closed for user core
Oct 8 20:02:27.546121 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:46584.service: Deactivated successfully.
Oct 8 20:02:27.547521 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 20:02:27.548697 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit.
Oct 8 20:02:27.558024 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:46592.service - OpenSSH per-connection server daemon (10.0.0.1:46592).
Oct 8 20:02:27.559655 systemd-logind[1415]: Removed session 2.
Oct 8 20:02:27.591255 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 46592 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:27.592543 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:27.596728 systemd-logind[1415]: New session 3 of user core.
Oct 8 20:02:27.603841 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 20:02:27.654815 sshd[1553]: pam_unix(sshd:session): session closed for user core
Oct 8 20:02:27.665091 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:46592.service: Deactivated successfully.
Oct 8 20:02:27.668026 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 20:02:27.669282 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit.
Oct 8 20:02:27.670439 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:46594.service - OpenSSH per-connection server daemon (10.0.0.1:46594).
Oct 8 20:02:27.672072 systemd-logind[1415]: Removed session 3.
Oct 8 20:02:27.708204 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 46594 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:27.709444 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:27.713470 systemd-logind[1415]: New session 4 of user core.
Oct 8 20:02:27.723783 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 20:02:27.776103 sshd[1561]: pam_unix(sshd:session): session closed for user core
Oct 8 20:02:27.789965 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:46594.service: Deactivated successfully.
Oct 8 20:02:27.791316 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 20:02:27.793595 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit.
Oct 8 20:02:27.802914 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:46606.service - OpenSSH per-connection server daemon (10.0.0.1:46606).
Oct 8 20:02:27.804076 systemd-logind[1415]: Removed session 4.
Oct 8 20:02:27.841293 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 46606 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:27.842570 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:27.846677 systemd-logind[1415]: New session 5 of user core.
Oct 8 20:02:27.859780 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 20:02:27.928135 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 20:02:27.928409 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:02:27.946434 sudo[1571]: pam_unix(sudo:session): session closed for user root
Oct 8 20:02:27.948197 sshd[1568]: pam_unix(sshd:session): session closed for user core
Oct 8 20:02:27.956966 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:46606.service: Deactivated successfully.
Oct 8 20:02:27.958505 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 20:02:27.959814 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit.
Oct 8 20:02:27.961127 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:46608.service - OpenSSH per-connection server daemon (10.0.0.1:46608).
Oct 8 20:02:27.963695 systemd-logind[1415]: Removed session 5.
Oct 8 20:02:27.997254 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 46608 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:27.998538 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:28.002379 systemd-logind[1415]: New session 6 of user core.
Oct 8 20:02:28.016784 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 20:02:28.067856 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 20:02:28.068121 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:02:28.070932 sudo[1580]: pam_unix(sudo:session): session closed for user root
Oct 8 20:02:28.075431 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 20:02:28.075770 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:02:28.097932 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 20:02:28.099284 auditctl[1583]: No rules
Oct 8 20:02:28.099715 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 20:02:28.099901 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 20:02:28.103558 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:02:28.126689 augenrules[1601]: No rules
Oct 8 20:02:28.128713 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:02:28.129652 sudo[1579]: pam_unix(sudo:session): session closed for user root
Oct 8 20:02:28.131021 sshd[1576]: pam_unix(sshd:session): session closed for user core
Oct 8 20:02:28.141007 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:46608.service: Deactivated successfully.
Oct 8 20:02:28.142365 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 20:02:28.143579 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit.
Oct 8 20:02:28.151925 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:46610.service - OpenSSH per-connection server daemon (10.0.0.1:46610).
Oct 8 20:02:28.152751 systemd-logind[1415]: Removed session 6.
Oct 8 20:02:28.184329 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 46610 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A
Oct 8 20:02:28.185100 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:02:28.188693 systemd-logind[1415]: New session 7 of user core.
Oct 8 20:02:28.195760 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 20:02:28.246736 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 20:02:28.247007 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:02:28.554888 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 20:02:28.555034 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 20:02:28.814573 dockerd[1630]: time="2024-10-08T20:02:28.814446649Z" level=info msg="Starting up"
Oct 8 20:02:28.949021 dockerd[1630]: time="2024-10-08T20:02:28.948976489Z" level=info msg="Loading containers: start."
Oct 8 20:02:29.031641 kernel: Initializing XFRM netlink socket
Oct 8 20:02:29.095358 systemd-networkd[1362]: docker0: Link UP
Oct 8 20:02:29.109810 dockerd[1630]: time="2024-10-08T20:02:29.109772849Z" level=info msg="Loading containers: done."
Oct 8 20:02:29.121902 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4028385432-merged.mount: Deactivated successfully.
Oct 8 20:02:29.123713 dockerd[1630]: time="2024-10-08T20:02:29.123665009Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 20:02:29.123790 dockerd[1630]: time="2024-10-08T20:02:29.123762689Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 8 20:02:29.123874 dockerd[1630]: time="2024-10-08T20:02:29.123856129Z" level=info msg="Daemon has completed initialization"
Oct 8 20:02:29.152406 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 20:02:29.152893 dockerd[1630]: time="2024-10-08T20:02:29.152184769Z" level=info msg="API listen on /run/docker.sock"
Oct 8 20:02:29.761722 containerd[1434]: time="2024-10-08T20:02:29.761640329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 20:02:30.420342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49404280.mount: Deactivated successfully.
Oct 8 20:02:32.785114 containerd[1434]: time="2024-10-08T20:02:32.785054409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:32.785512 containerd[1434]: time="2024-10-08T20:02:32.785470609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060"
Oct 8 20:02:32.786386 containerd[1434]: time="2024-10-08T20:02:32.786354729Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:32.789618 containerd[1434]: time="2024-10-08T20:02:32.789582289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:32.790867 containerd[1434]: time="2024-10-08T20:02:32.790822809Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 3.02913992s"
Oct 8 20:02:32.790867 containerd[1434]: time="2024-10-08T20:02:32.790863609Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 8 20:02:32.811346 containerd[1434]: time="2024-10-08T20:02:32.811310089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 20:02:32.950320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 20:02:32.959701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:02:33.051441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:33.054895 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:02:33.099134 kubelet[1855]: E1008 20:02:33.099078 1855 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:02:33.102588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:02:33.102762 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:02:34.539514 containerd[1434]: time="2024-10-08T20:02:34.539425089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:34.540303 containerd[1434]: time="2024-10-08T20:02:34.540271209Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206"
Oct 8 20:02:34.541004 containerd[1434]: time="2024-10-08T20:02:34.540950889Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:34.546651 containerd[1434]: time="2024-10-08T20:02:34.543749569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:34.546651 containerd[1434]: time="2024-10-08T20:02:34.545275689Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.73392684s"
Oct 8 20:02:34.546651 containerd[1434]: time="2024-10-08T20:02:34.545307089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 8 20:02:34.565440 containerd[1434]: time="2024-10-08T20:02:34.565390929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 20:02:36.576403 containerd[1434]: time="2024-10-08T20:02:36.576347929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:36.577047 containerd[1434]: time="2024-10-08T20:02:36.577010129Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219"
Oct 8 20:02:36.577659 containerd[1434]: time="2024-10-08T20:02:36.577631969Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:36.581058 containerd[1434]: time="2024-10-08T20:02:36.581010729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:36.581731 containerd[1434]: time="2024-10-08T20:02:36.581701369Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 2.01627148s"
Oct 8 20:02:36.581788 containerd[1434]: time="2024-10-08T20:02:36.581730449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 8 20:02:36.599399 containerd[1434]: time="2024-10-08T20:02:36.599202889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 20:02:37.620802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676875069.mount: Deactivated successfully.
Oct 8 20:02:38.086404 containerd[1434]: time="2024-10-08T20:02:38.085941689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:38.087583 containerd[1434]: time="2024-10-08T20:02:38.087542249Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040"
Oct 8 20:02:38.089520 containerd[1434]: time="2024-10-08T20:02:38.089441649Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:38.094404 containerd[1434]: time="2024-10-08T20:02:38.094339809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:38.095423 containerd[1434]: time="2024-10-08T20:02:38.095071369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.49583416s"
Oct 8 20:02:38.095423 containerd[1434]: time="2024-10-08T20:02:38.095106929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 8 20:02:38.116059 containerd[1434]: time="2024-10-08T20:02:38.116010129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 20:02:38.701359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427888446.mount: Deactivated successfully.
Oct 8 20:02:39.301032 containerd[1434]: time="2024-10-08T20:02:39.300981689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:39.302079 containerd[1434]: time="2024-10-08T20:02:39.301688329Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 8 20:02:39.303292 containerd[1434]: time="2024-10-08T20:02:39.303227569Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:39.306004 containerd[1434]: time="2024-10-08T20:02:39.305957449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:39.307393 containerd[1434]: time="2024-10-08T20:02:39.307346129Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.19129764s"
Oct 8 20:02:39.307393 containerd[1434]: time="2024-10-08T20:02:39.307383689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 20:02:39.326654 containerd[1434]: time="2024-10-08T20:02:39.326600209Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 20:02:39.785587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817320178.mount: Deactivated successfully.
Oct 8 20:02:39.790614 containerd[1434]: time="2024-10-08T20:02:39.790547529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:39.792287 containerd[1434]: time="2024-10-08T20:02:39.792195969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 8 20:02:39.793159 containerd[1434]: time="2024-10-08T20:02:39.793121689Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:39.795418 containerd[1434]: time="2024-10-08T20:02:39.795377329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:39.796916 containerd[1434]: time="2024-10-08T20:02:39.796884489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 470.22732ms"
Oct 8 20:02:39.796916 containerd[1434]: time="2024-10-08T20:02:39.796912569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 20:02:39.818190 containerd[1434]: time="2024-10-08T20:02:39.818130889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 20:02:40.405029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650104933.mount: Deactivated successfully.
Oct 8 20:02:42.313758 containerd[1434]: time="2024-10-08T20:02:42.313577569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:42.314693 containerd[1434]: time="2024-10-08T20:02:42.314445489Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Oct 8 20:02:42.315389 containerd[1434]: time="2024-10-08T20:02:42.315344449Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:42.319531 containerd[1434]: time="2024-10-08T20:02:42.319481849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:02:42.320329 containerd[1434]: time="2024-10-08T20:02:42.320296329Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.50212376s"
Oct 8 20:02:42.320385 containerd[1434]: time="2024-10-08T20:02:42.320328809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 8 20:02:43.159926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 20:02:43.168820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:02:43.262055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:43.266027 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:02:43.317430 kubelet[2088]: E1008 20:02:43.317375 2088 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:02:43.320248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:02:43.320390 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:02:49.255157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:49.270871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:02:49.289073 systemd[1]: Reloading requested from client PID 2104 ('systemctl') (unit session-7.scope)...
Oct 8 20:02:49.289090 systemd[1]: Reloading...
Oct 8 20:02:49.354738 zram_generator::config[2144]: No configuration found.
Oct 8 20:02:49.589241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:02:49.640744 systemd[1]: Reloading finished in 351 ms.
Oct 8 20:02:49.676797 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 20:02:49.676859 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 20:02:49.677043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:49.678991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:02:49.766133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:49.769755 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 20:02:49.811146 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:02:49.811146 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 20:02:49.811146 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:02:49.811440 kubelet[2189]: I1008 20:02:49.811195 2189 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:02:50.647018 kubelet[2189]: I1008 20:02:50.646979 2189 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 20:02:50.647018 kubelet[2189]: I1008 20:02:50.647009 2189 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:02:50.647220 kubelet[2189]: I1008 20:02:50.647203 2189 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 20:02:50.683116 kubelet[2189]: I1008 20:02:50.683077 2189 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:02:50.684954 kubelet[2189]: E1008 20:02:50.684935 2189 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.690716 kubelet[2189]: I1008 20:02:50.690649 2189 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 20:02:50.690879 kubelet[2189]: I1008 20:02:50.690855 2189 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:02:50.691038 kubelet[2189]: I1008 20:02:50.691019 2189 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 20:02:50.691118 kubelet[2189]: I1008 20:02:50.691043 2189 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:02:50.691118 kubelet[2189]: I1008 20:02:50.691052 2189 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 20:02:50.691169 kubelet[2189]: I1008 20:02:50.691149 2189 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:02:50.693411 kubelet[2189]: I1008 20:02:50.693386 2189 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 20:02:50.693411 kubelet[2189]: I1008 20:02:50.693411 2189 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:02:50.693481 kubelet[2189]: I1008 20:02:50.693439 2189 kubelet.go:312] "Adding apiserver pod source"
Oct 8 20:02:50.693481 kubelet[2189]: I1008 20:02:50.693454 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:02:50.697653 kubelet[2189]: W1008 20:02:50.694936 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.697653 kubelet[2189]: E1008 20:02:50.694993 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.697653 kubelet[2189]: I1008 20:02:50.695541 2189 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 20:02:50.697653 kubelet[2189]: I1008 20:02:50.696150 2189 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:02:50.697653 kubelet[2189]: W1008 20:02:50.696315 2189 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 20:02:50.697653 kubelet[2189]: I1008 20:02:50.697302 2189 server.go:1256] "Started kubelet"
Oct 8 20:02:50.702447 kubelet[2189]: W1008 20:02:50.702376 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.702447 kubelet[2189]: E1008 20:02:50.702446 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.703228 kubelet[2189]: I1008 20:02:50.703210 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 20:02:50.703323 kubelet[2189]: E1008 20:02:50.703295 2189 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92ce0de25341 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:02:50.697282369 +0000 UTC m=+0.924260641,LastTimestamp:2024-10-08 20:02:50.697282369 +0000 UTC m=+0.924260641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 20:02:50.703550 kubelet[2189]: I1008 20:02:50.703523 2189 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 20:02:50.704000 kubelet[2189]: I1008 20:02:50.703980 2189 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 20:02:50.704339 kubelet[2189]: I1008 20:02:50.704305 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 20:02:50.704553 kubelet[2189]: I1008 20:02:50.704524 2189 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 20:02:50.707279 kubelet[2189]: I1008 20:02:50.705963 2189 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 20:02:50.707279 kubelet[2189]: I1008 20:02:50.706033 2189 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 20:02:50.707279 kubelet[2189]: I1008 20:02:50.706082 2189 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 20:02:50.707279 kubelet[2189]: W1008 20:02:50.706356 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.707279 kubelet[2189]: E1008 20:02:50.706389 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.707788 kubelet[2189]: E1008 20:02:50.707765 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms"
Oct 8 20:02:50.709130 kubelet[2189]: I1008 20:02:50.709110 2189 factory.go:221] Registration of the systemd container factory successfully
Oct 8 20:02:50.709268 kubelet[2189]: I1008 20:02:50.709250 2189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 20:02:50.710176 kubelet[2189]: I1008 20:02:50.710148 2189 factory.go:221] Registration of the containerd container factory successfully
Oct 8 20:02:50.713082 kubelet[2189]: E1008 20:02:50.713039 2189 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 20:02:50.719815 kubelet[2189]: I1008 20:02:50.719742 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 20:02:50.721035 kubelet[2189]: I1008 20:02:50.721017 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 20:02:50.721076 kubelet[2189]: I1008 20:02:50.721039 2189 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 20:02:50.721076 kubelet[2189]: I1008 20:02:50.721056 2189 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 20:02:50.721127 kubelet[2189]: E1008 20:02:50.721096 2189 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 20:02:50.724406 kubelet[2189]: W1008 20:02:50.724354 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.724406 kubelet[2189]: E1008 20:02:50.724407 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Oct 8 20:02:50.725261 kubelet[2189]: I1008 20:02:50.725227 2189 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 20:02:50.725325 kubelet[2189]: I1008 20:02:50.725307 2189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 20:02:50.725346 kubelet[2189]: I1008 20:02:50.725327 2189 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:02:50.788243 kubelet[2189]: I1008 20:02:50.788199 2189 policy_none.go:49] "None policy: Start"
Oct 8 20:02:50.789286 kubelet[2189]: I1008 20:02:50.789259 2189 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 20:02:50.789439 kubelet[2189]: I1008 20:02:50.789309 2189 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 20:02:50.794969 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 20:02:50.807587 kubelet[2189]: I1008 20:02:50.807564 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 20:02:50.807987 kubelet[2189]: E1008 20:02:50.807963 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost"
Oct 8 20:02:50.815671 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 20:02:50.818875 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 20:02:50.821605 kubelet[2189]: E1008 20:02:50.821574 2189 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:02:50.828503 kubelet[2189]: I1008 20:02:50.828391 2189 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:02:50.828753 kubelet[2189]: I1008 20:02:50.828678 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:02:50.830375 kubelet[2189]: E1008 20:02:50.830347 2189 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 20:02:50.908734 kubelet[2189]: E1008 20:02:50.908593 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Oct 8 20:02:51.010958 kubelet[2189]: I1008 20:02:51.010917 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:02:51.011263 kubelet[2189]: E1008 20:02:51.011244 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Oct 8 20:02:51.022596 kubelet[2189]: I1008 20:02:51.022491 2189 topology_manager.go:215] "Topology Admit Handler" podUID="ce23f7a36879d97fbf1b6f0c78823c8a" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 20:02:51.023566 kubelet[2189]: I1008 20:02:51.023543 2189 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 20:02:51.024354 kubelet[2189]: I1008 20:02:51.024331 2189 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" 
podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 20:02:51.030397 systemd[1]: Created slice kubepods-burstable-podce23f7a36879d97fbf1b6f0c78823c8a.slice - libcontainer container kubepods-burstable-podce23f7a36879d97fbf1b6f0c78823c8a.slice. Oct 8 20:02:51.050970 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 8 20:02:51.069199 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. Oct 8 20:02:51.110291 kubelet[2189]: I1008 20:02:51.110244 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce23f7a36879d97fbf1b6f0c78823c8a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ce23f7a36879d97fbf1b6f0c78823c8a\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:02:51.110291 kubelet[2189]: I1008 20:02:51.110295 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:02:51.110380 kubelet[2189]: I1008 20:02:51.110326 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:02:51.110380 kubelet[2189]: I1008 20:02:51.110348 2189 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 20:02:51.110380 kubelet[2189]: I1008 20:02:51.110368 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce23f7a36879d97fbf1b6f0c78823c8a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce23f7a36879d97fbf1b6f0c78823c8a\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:02:51.110466 kubelet[2189]: I1008 20:02:51.110409 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce23f7a36879d97fbf1b6f0c78823c8a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce23f7a36879d97fbf1b6f0c78823c8a\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:02:51.110466 kubelet[2189]: I1008 20:02:51.110435 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:02:51.110466 kubelet[2189]: I1008 20:02:51.110456 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:02:51.110525 kubelet[2189]: I1008 20:02:51.110476 2189 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:02:51.309235 kubelet[2189]: E1008 20:02:51.309187 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Oct 8 20:02:51.349572 kubelet[2189]: E1008 20:02:51.349534 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:51.350161 containerd[1434]: time="2024-10-08T20:02:51.350127049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ce23f7a36879d97fbf1b6f0c78823c8a,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:51.353439 kubelet[2189]: E1008 20:02:51.353399 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:51.353966 containerd[1434]: time="2024-10-08T20:02:51.353738049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:51.371218 kubelet[2189]: E1008 20:02:51.371187 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:51.371830 containerd[1434]: time="2024-10-08T20:02:51.371544129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" 
Oct 8 20:02:51.413189 kubelet[2189]: I1008 20:02:51.413157 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:02:51.413541 kubelet[2189]: E1008 20:02:51.413508 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Oct 8 20:02:51.600511 kubelet[2189]: W1008 20:02:51.600392 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:51.600511 kubelet[2189]: E1008 20:02:51.600439 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:51.685676 kubelet[2189]: W1008 20:02:51.685607 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:51.685676 kubelet[2189]: E1008 20:02:51.685666 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:51.851476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863529748.mount: Deactivated successfully. 
Oct 8 20:02:51.855767 containerd[1434]: time="2024-10-08T20:02:51.855705729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:51.857286 containerd[1434]: time="2024-10-08T20:02:51.857247449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 20:02:51.861931 containerd[1434]: time="2024-10-08T20:02:51.861889609Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:51.863329 containerd[1434]: time="2024-10-08T20:02:51.863274209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:51.864187 containerd[1434]: time="2024-10-08T20:02:51.864157009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:02:51.864867 containerd[1434]: time="2024-10-08T20:02:51.864818529Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:51.867498 containerd[1434]: time="2024-10-08T20:02:51.867445209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:02:51.869470 containerd[1434]: time="2024-10-08T20:02:51.869394649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:02:51.871824 
containerd[1434]: time="2024-10-08T20:02:51.871796009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.18868ms" Oct 8 20:02:51.873260 containerd[1434]: time="2024-10-08T20:02:51.873169929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.96616ms" Oct 8 20:02:51.873969 containerd[1434]: time="2024-10-08T20:02:51.873882129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.0804ms" Oct 8 20:02:52.040924 containerd[1434]: time="2024-10-08T20:02:52.040837929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:52.040924 containerd[1434]: time="2024-10-08T20:02:52.040890729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:52.041066 containerd[1434]: time="2024-10-08T20:02:52.040512729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:52.041066 containerd[1434]: time="2024-10-08T20:02:52.041018889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:52.041066 containerd[1434]: time="2024-10-08T20:02:52.041040289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:52.041298 containerd[1434]: time="2024-10-08T20:02:52.041124889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:52.041338 containerd[1434]: time="2024-10-08T20:02:52.041172609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:52.041338 containerd[1434]: time="2024-10-08T20:02:52.041282649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:52.041894 containerd[1434]: time="2024-10-08T20:02:52.041835009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:52.041894 containerd[1434]: time="2024-10-08T20:02:52.041873129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:52.041980 containerd[1434]: time="2024-10-08T20:02:52.041892449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:52.041980 containerd[1434]: time="2024-10-08T20:02:52.041958009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:52.053854 kubelet[2189]: W1008 20:02:52.053010 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:52.053854 kubelet[2189]: E1008 20:02:52.053059 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:52.068812 systemd[1]: Started cri-containerd-01e6189cebcf84141042ec2014805ead1e5399201f527d22398f088e94f2270e.scope - libcontainer container 01e6189cebcf84141042ec2014805ead1e5399201f527d22398f088e94f2270e. Oct 8 20:02:52.070159 systemd[1]: Started cri-containerd-4bb36c806339008177f5efd82092cd0dca1727f94921d0f1e9e7688158dd9057.scope - libcontainer container 4bb36c806339008177f5efd82092cd0dca1727f94921d0f1e9e7688158dd9057. Oct 8 20:02:52.071318 systemd[1]: Started cri-containerd-d8788eb3e94ab6f67b7e2a44adc03c0f705999beb90f63faa070ca888ea2435e.scope - libcontainer container d8788eb3e94ab6f67b7e2a44adc03c0f705999beb90f63faa070ca888ea2435e. 
Oct 8 20:02:52.106238 containerd[1434]: time="2024-10-08T20:02:52.103939289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"01e6189cebcf84141042ec2014805ead1e5399201f527d22398f088e94f2270e\"" Oct 8 20:02:52.106238 containerd[1434]: time="2024-10-08T20:02:52.104103289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ce23f7a36879d97fbf1b6f0c78823c8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bb36c806339008177f5efd82092cd0dca1727f94921d0f1e9e7688158dd9057\"" Oct 8 20:02:52.107865 kubelet[2189]: E1008 20:02:52.107827 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:52.108147 kubelet[2189]: E1008 20:02:52.108030 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:52.109188 containerd[1434]: time="2024-10-08T20:02:52.109122969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8788eb3e94ab6f67b7e2a44adc03c0f705999beb90f63faa070ca888ea2435e\"" Oct 8 20:02:52.109587 kubelet[2189]: E1008 20:02:52.109563 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Oct 8 20:02:52.110037 kubelet[2189]: E1008 20:02:52.110019 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 8 20:02:52.111067 containerd[1434]: time="2024-10-08T20:02:52.111027449Z" level=info msg="CreateContainer within sandbox \"01e6189cebcf84141042ec2014805ead1e5399201f527d22398f088e94f2270e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:02:52.111444 containerd[1434]: time="2024-10-08T20:02:52.111404609Z" level=info msg="CreateContainer within sandbox \"4bb36c806339008177f5efd82092cd0dca1727f94921d0f1e9e7688158dd9057\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:02:52.111980 containerd[1434]: time="2024-10-08T20:02:52.111943609Z" level=info msg="CreateContainer within sandbox \"d8788eb3e94ab6f67b7e2a44adc03c0f705999beb90f63faa070ca888ea2435e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:02:52.127526 containerd[1434]: time="2024-10-08T20:02:52.127490249Z" level=info msg="CreateContainer within sandbox \"d8788eb3e94ab6f67b7e2a44adc03c0f705999beb90f63faa070ca888ea2435e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c443e7b867fb4f36352f57ad5a2ce4745d8c2966bd5103c073f331eacddf7ecf\"" Oct 8 20:02:52.128139 containerd[1434]: time="2024-10-08T20:02:52.128112769Z" level=info msg="StartContainer for \"c443e7b867fb4f36352f57ad5a2ce4745d8c2966bd5103c073f331eacddf7ecf\"" Oct 8 20:02:52.128199 containerd[1434]: time="2024-10-08T20:02:52.128142729Z" level=info msg="CreateContainer within sandbox \"01e6189cebcf84141042ec2014805ead1e5399201f527d22398f088e94f2270e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10978ef25540df84fadf0b788330f69de1278d6c6b8f50447fa869e4676308ef\"" Oct 8 20:02:52.128550 containerd[1434]: time="2024-10-08T20:02:52.128522649Z" level=info msg="StartContainer for \"10978ef25540df84fadf0b788330f69de1278d6c6b8f50447fa869e4676308ef\"" Oct 8 20:02:52.133771 containerd[1434]: time="2024-10-08T20:02:52.133651049Z" level=info msg="CreateContainer within sandbox 
\"4bb36c806339008177f5efd82092cd0dca1727f94921d0f1e9e7688158dd9057\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"135c556f84e3372593afb13910cf00f7b72a9383f3efd8eba093c34dedd8d9dc\"" Oct 8 20:02:52.134066 containerd[1434]: time="2024-10-08T20:02:52.134039529Z" level=info msg="StartContainer for \"135c556f84e3372593afb13910cf00f7b72a9383f3efd8eba093c34dedd8d9dc\"" Oct 8 20:02:52.154803 systemd[1]: Started cri-containerd-c443e7b867fb4f36352f57ad5a2ce4745d8c2966bd5103c073f331eacddf7ecf.scope - libcontainer container c443e7b867fb4f36352f57ad5a2ce4745d8c2966bd5103c073f331eacddf7ecf. Oct 8 20:02:52.158374 systemd[1]: Started cri-containerd-10978ef25540df84fadf0b788330f69de1278d6c6b8f50447fa869e4676308ef.scope - libcontainer container 10978ef25540df84fadf0b788330f69de1278d6c6b8f50447fa869e4676308ef. Oct 8 20:02:52.159714 systemd[1]: Started cri-containerd-135c556f84e3372593afb13910cf00f7b72a9383f3efd8eba093c34dedd8d9dc.scope - libcontainer container 135c556f84e3372593afb13910cf00f7b72a9383f3efd8eba093c34dedd8d9dc. 
Oct 8 20:02:52.192747 containerd[1434]: time="2024-10-08T20:02:52.192544929Z" level=info msg="StartContainer for \"c443e7b867fb4f36352f57ad5a2ce4745d8c2966bd5103c073f331eacddf7ecf\" returns successfully" Oct 8 20:02:52.206773 kubelet[2189]: W1008 20:02:52.206663 2189 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:52.206773 kubelet[2189]: E1008 20:02:52.206736 2189 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Oct 8 20:02:52.217704 kubelet[2189]: I1008 20:02:52.217682 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:02:52.221692 containerd[1434]: time="2024-10-08T20:02:52.221558489Z" level=info msg="StartContainer for \"10978ef25540df84fadf0b788330f69de1278d6c6b8f50447fa869e4676308ef\" returns successfully" Oct 8 20:02:52.221692 containerd[1434]: time="2024-10-08T20:02:52.221599289Z" level=info msg="StartContainer for \"135c556f84e3372593afb13910cf00f7b72a9383f3efd8eba093c34dedd8d9dc\" returns successfully" Oct 8 20:02:52.223500 kubelet[2189]: E1008 20:02:52.223427 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Oct 8 20:02:52.736462 kubelet[2189]: E1008 20:02:52.736290 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:52.738583 kubelet[2189]: E1008 20:02:52.737019 2189 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:52.739678 kubelet[2189]: E1008 20:02:52.739601 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:53.715582 kubelet[2189]: E1008 20:02:53.715543 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 20:02:53.741113 kubelet[2189]: E1008 20:02:53.741085 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:53.824821 kubelet[2189]: I1008 20:02:53.824760 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:02:53.831371 kubelet[2189]: I1008 20:02:53.831250 2189 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 20:02:54.697313 kubelet[2189]: I1008 20:02:54.697273 2189 apiserver.go:52] "Watching apiserver" Oct 8 20:02:54.706890 kubelet[2189]: I1008 20:02:54.706854 2189 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:02:56.397165 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-7.scope)... Oct 8 20:02:56.397182 systemd[1]: Reloading... Oct 8 20:02:56.462650 zram_generator::config[2510]: No configuration found. Oct 8 20:02:56.541163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:02:56.604483 systemd[1]: Reloading finished in 207 ms. 
Oct 8 20:02:56.636824 kubelet[2189]: I1008 20:02:56.636736 2189 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:02:56.637029 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:56.646584 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:02:56.646838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:56.646903 systemd[1]: kubelet.service: Consumed 1.274s CPU time, 114.6M memory peak, 0B memory swap peak. Oct 8 20:02:56.655005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:56.752122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:56.756387 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:02:56.798706 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:02:56.798706 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:02:56.798706 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 20:02:56.799023 kubelet[2549]: I1008 20:02:56.798757 2549 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:02:56.802614 kubelet[2549]: I1008 20:02:56.802589 2549 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 20:02:56.802614 kubelet[2549]: I1008 20:02:56.802631 2549 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:02:56.802845 kubelet[2549]: I1008 20:02:56.802828 2549 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 20:02:56.804275 kubelet[2549]: I1008 20:02:56.804244 2549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 20:02:56.805985 kubelet[2549]: I1008 20:02:56.805961 2549 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:02:56.812461 kubelet[2549]: I1008 20:02:56.812436 2549 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Oct 8 20:02:56.814099 kubelet[2549]: I1008 20:02:56.812841 2549 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:02:56.814099 kubelet[2549]: I1008 20:02:56.812998 2549 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 20:02:56.814099 kubelet[2549]: I1008 20:02:56.813021 2549 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:02:56.814099 kubelet[2549]: I1008 20:02:56.813029 2549 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 20:02:56.814099 kubelet[2549]: I1008 20:02:56.813056 2549 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:02:56.814099 kubelet[2549]: I1008 20:02:56.813152 2549 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 20:02:56.814373 kubelet[2549]: I1008 20:02:56.813166 2549 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:02:56.814373 kubelet[2549]: I1008 20:02:56.813189 2549 kubelet.go:312] "Adding apiserver pod source"
Oct 8 20:02:56.814373 kubelet[2549]: I1008 20:02:56.813203 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:02:56.815124 kubelet[2549]: I1008 20:02:56.815101 2549 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 20:02:56.815400 kubelet[2549]: I1008 20:02:56.815383 2549 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:02:56.815964 kubelet[2549]: I1008 20:02:56.815943 2549 server.go:1256] "Started kubelet"
Oct 8 20:02:56.817527 kubelet[2549]: I1008 20:02:56.817505 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 20:02:56.817660 kubelet[2549]: I1008 20:02:56.817638 2549 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 20:02:56.817769 kubelet[2549]: I1008 20:02:56.817749 2549 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 20:02:56.818373 kubelet[2549]: I1008 20:02:56.818348 2549 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 20:02:56.818967 kubelet[2549]: I1008 20:02:56.818944 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 20:02:56.819489 kubelet[2549]: E1008 20:02:56.819212 2549 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 20:02:56.819489 kubelet[2549]: I1008 20:02:56.819261 2549 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 20:02:56.819489 kubelet[2549]: E1008 20:02:56.819284 2549 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 20:02:56.819489 kubelet[2549]: I1008 20:02:56.819346 2549 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 20:02:56.819489 kubelet[2549]: I1008 20:02:56.819452 2549 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 20:02:56.823526 kubelet[2549]: I1008 20:02:56.823495 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 20:02:56.840321 kubelet[2549]: I1008 20:02:56.839924 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 20:02:56.842451 kubelet[2549]: I1008 20:02:56.841815 2549 factory.go:221] Registration of the containerd container factory successfully
Oct 8 20:02:56.842451 kubelet[2549]: I1008 20:02:56.841836 2549 factory.go:221] Registration of the systemd container factory successfully
Oct 8 20:02:56.845353 kubelet[2549]: I1008 20:02:56.845316 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 20:02:56.845353 kubelet[2549]: I1008 20:02:56.845343 2549 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 20:02:56.845353 kubelet[2549]: I1008 20:02:56.845357 2549 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 20:02:56.845468 kubelet[2549]: E1008 20:02:56.845404 2549 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 20:02:56.876974 kubelet[2549]: I1008 20:02:56.876941 2549 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 20:02:56.876974 kubelet[2549]: I1008 20:02:56.876965 2549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 20:02:56.877110 kubelet[2549]: I1008 20:02:56.876996 2549 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:02:56.877580 kubelet[2549]: I1008 20:02:56.877138 2549 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 20:02:56.877580 kubelet[2549]: I1008 20:02:56.877164 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 20:02:56.877580 kubelet[2549]: I1008 20:02:56.877171 2549 policy_none.go:49] "None policy: Start"
Oct 8 20:02:56.878358 kubelet[2549]: I1008 20:02:56.877693 2549 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 20:02:56.878358 kubelet[2549]: I1008 20:02:56.877726 2549 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 20:02:56.878358 kubelet[2549]: I1008 20:02:56.877884 2549 state_mem.go:75] "Updated machine memory state"
Oct 8 20:02:56.882089 kubelet[2549]: I1008 20:02:56.882064 2549 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 20:02:56.882298 kubelet[2549]: I1008 20:02:56.882277 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 20:02:56.923309 kubelet[2549]: I1008 20:02:56.922954 2549 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 20:02:56.931178 kubelet[2549]: I1008 20:02:56.931147 2549 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 8 20:02:56.931261 kubelet[2549]: I1008 20:02:56.931216 2549 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 8 20:02:56.945734 kubelet[2549]: I1008 20:02:56.945659 2549 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 8 20:02:56.945734 kubelet[2549]: I1008 20:02:56.945738 2549 topology_manager.go:215] "Topology Admit Handler" podUID="ce23f7a36879d97fbf1b6f0c78823c8a" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 8 20:02:56.945853 kubelet[2549]: I1008 20:02:56.945794 2549 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 8 20:02:57.120672 kubelet[2549]: I1008 20:02:57.120596 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce23f7a36879d97fbf1b6f0c78823c8a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce23f7a36879d97fbf1b6f0c78823c8a\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 20:02:57.120672 kubelet[2549]: I1008 20:02:57.120671 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce23f7a36879d97fbf1b6f0c78823c8a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce23f7a36879d97fbf1b6f0c78823c8a\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 20:02:57.120823 kubelet[2549]: I1008 20:02:57.120698 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce23f7a36879d97fbf1b6f0c78823c8a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ce23f7a36879d97fbf1b6f0c78823c8a\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 20:02:57.120823 kubelet[2549]: I1008 20:02:57.120753 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 20:02:57.120823 kubelet[2549]: I1008 20:02:57.120778 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 20:02:57.120823 kubelet[2549]: I1008 20:02:57.120812 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 20:02:57.120909 kubelet[2549]: I1008 20:02:57.120887 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 20:02:57.120946 kubelet[2549]: I1008 20:02:57.120926 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 20:02:57.120972 kubelet[2549]: I1008 20:02:57.120957 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 20:02:57.263109 kubelet[2549]: E1008 20:02:57.262971 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:57.263109 kubelet[2549]: E1008 20:02:57.262993 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:57.263747 kubelet[2549]: E1008 20:02:57.263642 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:57.814304 kubelet[2549]: I1008 20:02:57.814154 2549 apiserver.go:52] "Watching apiserver"
Oct 8 20:02:57.820195 kubelet[2549]: I1008 20:02:57.820148 2549 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 20:02:57.862821 kubelet[2549]: E1008 20:02:57.862788 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:57.877681 kubelet[2549]: E1008 20:02:57.877650 2549 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 8 20:02:57.878467 kubelet[2549]: E1008 20:02:57.878386 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:57.884359 kubelet[2549]: E1008 20:02:57.884046 2549 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 8 20:02:57.884359 kubelet[2549]: E1008 20:02:57.884298 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:57.923421 kubelet[2549]: I1008 20:02:57.923381 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.923339381 podStartE2EDuration="1.923339381s" podCreationTimestamp="2024-10-08 20:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:57.916690751 +0000 UTC m=+1.157214189" watchObservedRunningTime="2024-10-08 20:02:57.923339381 +0000 UTC m=+1.163862819"
Oct 8 20:02:57.931705 kubelet[2549]: I1008 20:02:57.931585 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.931553072 podStartE2EDuration="1.931553072s" podCreationTimestamp="2024-10-08 20:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:57.924997208 +0000 UTC m=+1.165520646" watchObservedRunningTime="2024-10-08 20:02:57.931553072 +0000 UTC m=+1.172076470"
Oct 8 20:02:57.938536 kubelet[2549]: I1008 20:02:57.938505 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.938475359 podStartE2EDuration="1.938475359s" podCreationTimestamp="2024-10-08 20:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:57.931737484 +0000 UTC m=+1.172260922" watchObservedRunningTime="2024-10-08 20:02:57.938475359 +0000 UTC m=+1.178998797"
Oct 8 20:02:58.863958 kubelet[2549]: E1008 20:02:58.863905 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:58.864279 kubelet[2549]: E1008 20:02:58.863964 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:02:58.865201 kubelet[2549]: E1008 20:02:58.864390 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:00.948501 sudo[1612]: pam_unix(sudo:session): session closed for user root
Oct 8 20:03:00.951705 sshd[1609]: pam_unix(sshd:session): session closed for user core
Oct 8 20:03:00.954385 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:46610.service: Deactivated successfully.
Oct 8 20:03:00.956021 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 20:03:00.956207 systemd[1]: session-7.scope: Consumed 9.243s CPU time, 187.5M memory peak, 0B memory swap peak.
Oct 8 20:03:00.957364 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit.
Oct 8 20:03:00.958324 systemd-logind[1415]: Removed session 7.
Oct 8 20:03:01.145473 kubelet[2549]: E1008 20:03:01.145312 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:04.604953 kubelet[2549]: E1008 20:03:04.604895 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:04.872716 kubelet[2549]: E1008 20:03:04.872571 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:05.649674 update_engine[1422]: I20241008 20:03:05.649259 1422 update_attempter.cc:509] Updating boot flags...
Oct 8 20:03:05.671688 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2645)
Oct 8 20:03:05.694685 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2648)
Oct 8 20:03:05.722672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2648)
Oct 8 20:03:08.435641 kubelet[2549]: E1008 20:03:08.435558 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:11.159972 kubelet[2549]: E1008 20:03:11.159915 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:11.680182 kubelet[2549]: I1008 20:03:11.680156 2549 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 20:03:11.698666 containerd[1434]: time="2024-10-08T20:03:11.698514050Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 20:03:11.698980 kubelet[2549]: I1008 20:03:11.698861 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 20:03:12.521671 kubelet[2549]: I1008 20:03:12.519476 2549 topology_manager.go:215] "Topology Admit Handler" podUID="9320a5f7-b3cd-48e9-a7be-c715264a17f2" podNamespace="kube-system" podName="kube-proxy-kkhvv"
Oct 8 20:03:12.531332 systemd[1]: Created slice kubepods-besteffort-pod9320a5f7_b3cd_48e9_a7be_c715264a17f2.slice - libcontainer container kubepods-besteffort-pod9320a5f7_b3cd_48e9_a7be_c715264a17f2.slice.
Oct 8 20:03:12.637773 kubelet[2549]: I1008 20:03:12.637729 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9320a5f7-b3cd-48e9-a7be-c715264a17f2-kube-proxy\") pod \"kube-proxy-kkhvv\" (UID: \"9320a5f7-b3cd-48e9-a7be-c715264a17f2\") " pod="kube-system/kube-proxy-kkhvv"
Oct 8 20:03:12.637773 kubelet[2549]: I1008 20:03:12.637775 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9320a5f7-b3cd-48e9-a7be-c715264a17f2-xtables-lock\") pod \"kube-proxy-kkhvv\" (UID: \"9320a5f7-b3cd-48e9-a7be-c715264a17f2\") " pod="kube-system/kube-proxy-kkhvv"
Oct 8 20:03:12.637931 kubelet[2549]: I1008 20:03:12.637811 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmjcq\" (UniqueName: \"kubernetes.io/projected/9320a5f7-b3cd-48e9-a7be-c715264a17f2-kube-api-access-vmjcq\") pod \"kube-proxy-kkhvv\" (UID: \"9320a5f7-b3cd-48e9-a7be-c715264a17f2\") " pod="kube-system/kube-proxy-kkhvv"
Oct 8 20:03:12.637931 kubelet[2549]: I1008 20:03:12.637834 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9320a5f7-b3cd-48e9-a7be-c715264a17f2-lib-modules\") pod \"kube-proxy-kkhvv\" (UID: \"9320a5f7-b3cd-48e9-a7be-c715264a17f2\") " pod="kube-system/kube-proxy-kkhvv"
Oct 8 20:03:12.737055 kubelet[2549]: I1008 20:03:12.737010 2549 topology_manager.go:215] "Topology Admit Handler" podUID="c8f1fc26-cca7-425c-8be3-899ecf78b9d2" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-jj6dx"
Oct 8 20:03:12.744813 systemd[1]: Created slice kubepods-besteffort-podc8f1fc26_cca7_425c_8be3_899ecf78b9d2.slice - libcontainer container kubepods-besteffort-podc8f1fc26_cca7_425c_8be3_899ecf78b9d2.slice.
Oct 8 20:03:12.839360 kubelet[2549]: I1008 20:03:12.839219 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qznb\" (UniqueName: \"kubernetes.io/projected/c8f1fc26-cca7-425c-8be3-899ecf78b9d2-kube-api-access-2qznb\") pod \"tigera-operator-5d56685c77-jj6dx\" (UID: \"c8f1fc26-cca7-425c-8be3-899ecf78b9d2\") " pod="tigera-operator/tigera-operator-5d56685c77-jj6dx"
Oct 8 20:03:12.839360 kubelet[2549]: I1008 20:03:12.839265 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8f1fc26-cca7-425c-8be3-899ecf78b9d2-var-lib-calico\") pod \"tigera-operator-5d56685c77-jj6dx\" (UID: \"c8f1fc26-cca7-425c-8be3-899ecf78b9d2\") " pod="tigera-operator/tigera-operator-5d56685c77-jj6dx"
Oct 8 20:03:12.840420 kubelet[2549]: E1008 20:03:12.840362 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:12.840892 containerd[1434]: time="2024-10-08T20:03:12.840828987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkhvv,Uid:9320a5f7-b3cd-48e9-a7be-c715264a17f2,Namespace:kube-system,Attempt:0,}"
Oct 8 20:03:12.862612 containerd[1434]: time="2024-10-08T20:03:12.862357715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:03:12.862612 containerd[1434]: time="2024-10-08T20:03:12.862456038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:03:12.862612 containerd[1434]: time="2024-10-08T20:03:12.862475438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:03:12.862794 containerd[1434]: time="2024-10-08T20:03:12.862569641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:03:12.884793 systemd[1]: Started cri-containerd-85af2cdcfeb52c44a9a69472bdc0b945c37c65c6013a5c5ec50f379bcb3aeb02.scope - libcontainer container 85af2cdcfeb52c44a9a69472bdc0b945c37c65c6013a5c5ec50f379bcb3aeb02.
Oct 8 20:03:12.905194 containerd[1434]: time="2024-10-08T20:03:12.905023323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkhvv,Uid:9320a5f7-b3cd-48e9-a7be-c715264a17f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"85af2cdcfeb52c44a9a69472bdc0b945c37c65c6013a5c5ec50f379bcb3aeb02\""
Oct 8 20:03:12.908674 kubelet[2549]: E1008 20:03:12.908559 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:12.912655 containerd[1434]: time="2024-10-08T20:03:12.912558108Z" level=info msg="CreateContainer within sandbox \"85af2cdcfeb52c44a9a69472bdc0b945c37c65c6013a5c5ec50f379bcb3aeb02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 20:03:12.926889 containerd[1434]: time="2024-10-08T20:03:12.926820418Z" level=info msg="CreateContainer within sandbox \"85af2cdcfeb52c44a9a69472bdc0b945c37c65c6013a5c5ec50f379bcb3aeb02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc6080c4daaed99571b36434344401d404708b5e16dbdf46db760870fa412e2d\""
Oct 8 20:03:12.927772 containerd[1434]: time="2024-10-08T20:03:12.927744521Z" level=info msg="StartContainer for \"cc6080c4daaed99571b36434344401d404708b5e16dbdf46db760870fa412e2d\""
Oct 8 20:03:12.966824 systemd[1]: Started cri-containerd-cc6080c4daaed99571b36434344401d404708b5e16dbdf46db760870fa412e2d.scope - libcontainer container cc6080c4daaed99571b36434344401d404708b5e16dbdf46db760870fa412e2d.
Oct 8 20:03:12.989812 containerd[1434]: time="2024-10-08T20:03:12.989251670Z" level=info msg="StartContainer for \"cc6080c4daaed99571b36434344401d404708b5e16dbdf46db760870fa412e2d\" returns successfully"
Oct 8 20:03:13.053198 containerd[1434]: time="2024-10-08T20:03:13.052780471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-jj6dx,Uid:c8f1fc26-cca7-425c-8be3-899ecf78b9d2,Namespace:tigera-operator,Attempt:0,}"
Oct 8 20:03:13.083907 containerd[1434]: time="2024-10-08T20:03:13.083692782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:03:13.083907 containerd[1434]: time="2024-10-08T20:03:13.083814265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:03:13.083907 containerd[1434]: time="2024-10-08T20:03:13.083842106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:03:13.084112 containerd[1434]: time="2024-10-08T20:03:13.083941228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:03:13.102801 systemd[1]: Started cri-containerd-f48e9a90440ef49d8a61b014fc0e1f30d47a2c36105e68c110387c28180e2aeb.scope - libcontainer container f48e9a90440ef49d8a61b014fc0e1f30d47a2c36105e68c110387c28180e2aeb.
Oct 8 20:03:13.133416 containerd[1434]: time="2024-10-08T20:03:13.133359965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-jj6dx,Uid:c8f1fc26-cca7-425c-8be3-899ecf78b9d2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f48e9a90440ef49d8a61b014fc0e1f30d47a2c36105e68c110387c28180e2aeb\""
Oct 8 20:03:13.135867 containerd[1434]: time="2024-10-08T20:03:13.135723980Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 20:03:13.767312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33466255.mount: Deactivated successfully.
Oct 8 20:03:13.886143 kubelet[2549]: E1008 20:03:13.886104 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:13.895685 kubelet[2549]: I1008 20:03:13.895490 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kkhvv" podStartSLOduration=1.895385062 podStartE2EDuration="1.895385062s" podCreationTimestamp="2024-10-08 20:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:03:13.894786688 +0000 UTC m=+17.135310086" watchObservedRunningTime="2024-10-08 20:03:13.895385062 +0000 UTC m=+17.135908500"
Oct 8 20:03:14.012186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193027478.mount: Deactivated successfully.
Oct 8 20:03:14.869897 containerd[1434]: time="2024-10-08T20:03:14.869837559Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:14.870330 containerd[1434]: time="2024-10-08T20:03:14.870292129Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485899"
Oct 8 20:03:14.871608 containerd[1434]: time="2024-10-08T20:03:14.871568956Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:14.874072 containerd[1434]: time="2024-10-08T20:03:14.874038930Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:14.874778 containerd[1434]: time="2024-10-08T20:03:14.874746425Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.738985525s"
Oct 8 20:03:14.874778 containerd[1434]: time="2024-10-08T20:03:14.874775946Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 8 20:03:14.881125 containerd[1434]: time="2024-10-08T20:03:14.881078842Z" level=info msg="CreateContainer within sandbox \"f48e9a90440ef49d8a61b014fc0e1f30d47a2c36105e68c110387c28180e2aeb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 20:03:14.893445 containerd[1434]: time="2024-10-08T20:03:14.893283265Z" level=info msg="CreateContainer within sandbox \"f48e9a90440ef49d8a61b014fc0e1f30d47a2c36105e68c110387c28180e2aeb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9337f33c462375f0da02dda042255cb35c2598416049e58937386690a3822b06\""
Oct 8 20:03:14.893908 containerd[1434]: time="2024-10-08T20:03:14.893876758Z" level=info msg="StartContainer for \"9337f33c462375f0da02dda042255cb35c2598416049e58937386690a3822b06\""
Oct 8 20:03:14.918768 systemd[1]: Started cri-containerd-9337f33c462375f0da02dda042255cb35c2598416049e58937386690a3822b06.scope - libcontainer container 9337f33c462375f0da02dda042255cb35c2598416049e58937386690a3822b06.
Oct 8 20:03:14.939138 containerd[1434]: time="2024-10-08T20:03:14.939097333Z" level=info msg="StartContainer for \"9337f33c462375f0da02dda042255cb35c2598416049e58937386690a3822b06\" returns successfully"
Oct 8 20:03:15.889519 systemd[1]: run-containerd-runc-k8s.io-9337f33c462375f0da02dda042255cb35c2598416049e58937386690a3822b06-runc.7I55b6.mount: Deactivated successfully.
Oct 8 20:03:18.964829 kubelet[2549]: I1008 20:03:18.964204 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-jj6dx" podStartSLOduration=5.22257075 podStartE2EDuration="6.964158772s" podCreationTimestamp="2024-10-08 20:03:12 +0000 UTC" firstStartedPulling="2024-10-08 20:03:13.134474511 +0000 UTC m=+16.374997949" lastFinishedPulling="2024-10-08 20:03:14.876062533 +0000 UTC m=+18.116585971" observedRunningTime="2024-10-08 20:03:15.923676413 +0000 UTC m=+19.164199851" watchObservedRunningTime="2024-10-08 20:03:18.964158772 +0000 UTC m=+22.204682210"
Oct 8 20:03:18.964829 kubelet[2549]: I1008 20:03:18.964345 2549 topology_manager.go:215] "Topology Admit Handler" podUID="dcb9570e-47bb-4851-b90e-3c10c91716e1" podNamespace="calico-system" podName="calico-typha-787797f787-k5nvx"
Oct 8 20:03:18.974108 systemd[1]: Created slice kubepods-besteffort-poddcb9570e_47bb_4851_b90e_3c10c91716e1.slice - libcontainer container kubepods-besteffort-poddcb9570e_47bb_4851_b90e_3c10c91716e1.slice.
Oct 8 20:03:18.980671 kubelet[2549]: I1008 20:03:18.980581 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dcb9570e-47bb-4851-b90e-3c10c91716e1-typha-certs\") pod \"calico-typha-787797f787-k5nvx\" (UID: \"dcb9570e-47bb-4851-b90e-3c10c91716e1\") " pod="calico-system/calico-typha-787797f787-k5nvx"
Oct 8 20:03:18.980671 kubelet[2549]: I1008 20:03:18.980643 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck2xz\" (UniqueName: \"kubernetes.io/projected/dcb9570e-47bb-4851-b90e-3c10c91716e1-kube-api-access-ck2xz\") pod \"calico-typha-787797f787-k5nvx\" (UID: \"dcb9570e-47bb-4851-b90e-3c10c91716e1\") " pod="calico-system/calico-typha-787797f787-k5nvx"
Oct 8 20:03:18.980811 kubelet[2549]: I1008 20:03:18.980738 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcb9570e-47bb-4851-b90e-3c10c91716e1-tigera-ca-bundle\") pod \"calico-typha-787797f787-k5nvx\" (UID: \"dcb9570e-47bb-4851-b90e-3c10c91716e1\") " pod="calico-system/calico-typha-787797f787-k5nvx"
Oct 8 20:03:19.054183 kubelet[2549]: I1008 20:03:19.054129 2549 topology_manager.go:215] "Topology Admit Handler" podUID="092a96b1-f202-4f02-b8d7-071c960409bc" podNamespace="calico-system" podName="calico-node-wl5h5"
Oct 8 20:03:19.063512 systemd[1]: Created slice kubepods-besteffort-pod092a96b1_f202_4f02_b8d7_071c960409bc.slice - libcontainer container kubepods-besteffort-pod092a96b1_f202_4f02_b8d7_071c960409bc.slice.
Oct 8 20:03:19.081968 kubelet[2549]: I1008 20:03:19.081928 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-var-lib-calico\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082114 kubelet[2549]: I1008 20:03:19.082066 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-policysync\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082546 kubelet[2549]: I1008 20:03:19.082193 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-cni-log-dir\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082546 kubelet[2549]: I1008 20:03:19.082232 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qtt\" (UniqueName: \"kubernetes.io/projected/092a96b1-f202-4f02-b8d7-071c960409bc-kube-api-access-p7qtt\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082546 kubelet[2549]: I1008 20:03:19.082261 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-var-run-calico\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082546 kubelet[2549]: I1008 20:03:19.082287 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-cni-net-dir\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082546 kubelet[2549]: I1008 20:03:19.082320 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-cni-bin-dir\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082709 kubelet[2549]: I1008 20:03:19.082339 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-xtables-lock\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082709 kubelet[2549]: I1008 20:03:19.082361 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/092a96b1-f202-4f02-b8d7-071c960409bc-node-certs\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082709 kubelet[2549]: I1008 20:03:19.082383 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/092a96b1-f202-4f02-b8d7-071c960409bc-tigera-ca-bundle\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082709 kubelet[2549]: I1008 20:03:19.082403 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-flexvol-driver-host\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.082709 kubelet[2549]: I1008 20:03:19.082422 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/092a96b1-f202-4f02-b8d7-071c960409bc-lib-modules\") pod \"calico-node-wl5h5\" (UID: \"092a96b1-f202-4f02-b8d7-071c960409bc\") " pod="calico-system/calico-node-wl5h5"
Oct 8 20:03:19.167553 kubelet[2549]: I1008 20:03:19.167492 2549 topology_manager.go:215] "Topology Admit Handler" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6" podNamespace="calico-system" podName="csi-node-driver-jvqdn"
Oct 8 20:03:19.169204 kubelet[2549]: E1008 20:03:19.169175 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvqdn" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6"
Oct 8 20:03:19.191606 kubelet[2549]: E1008 20:03:19.191557 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:19.191606 kubelet[2549]: W1008 20:03:19.191587 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:19.191606 kubelet[2549]: E1008 20:03:19.191611 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 8 20:03:19.205348 kubelet[2549]: E1008 20:03:19.205296 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.205348 kubelet[2549]: W1008 20:03:19.205325 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.205348 kubelet[2549]: E1008 20:03:19.205346 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.268962 kubelet[2549]: E1008 20:03:19.268838 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.268962 kubelet[2549]: W1008 20:03:19.268887 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.268962 kubelet[2549]: E1008 20:03:19.268909 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.269137 kubelet[2549]: E1008 20:03:19.269119 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.269137 kubelet[2549]: W1008 20:03:19.269129 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.269137 kubelet[2549]: E1008 20:03:19.269140 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.269338 kubelet[2549]: E1008 20:03:19.269278 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.269338 kubelet[2549]: W1008 20:03:19.269288 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.269338 kubelet[2549]: E1008 20:03:19.269298 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.269442 kubelet[2549]: E1008 20:03:19.269418 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.269442 kubelet[2549]: W1008 20:03:19.269427 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.269493 kubelet[2549]: E1008 20:03:19.269445 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.269608 kubelet[2549]: E1008 20:03:19.269584 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.269608 kubelet[2549]: W1008 20:03:19.269595 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.269608 kubelet[2549]: E1008 20:03:19.269605 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.269769 kubelet[2549]: E1008 20:03:19.269740 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.269769 kubelet[2549]: W1008 20:03:19.269751 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.269769 kubelet[2549]: E1008 20:03:19.269760 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.269886 kubelet[2549]: E1008 20:03:19.269870 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.269886 kubelet[2549]: W1008 20:03:19.269879 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.269952 kubelet[2549]: E1008 20:03:19.269890 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.270218 kubelet[2549]: E1008 20:03:19.270189 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.270218 kubelet[2549]: W1008 20:03:19.270204 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.270555 kubelet[2549]: E1008 20:03:19.270374 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271081 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273721 kubelet[2549]: W1008 20:03:19.271093 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271106 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271256 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273721 kubelet[2549]: W1008 20:03:19.271263 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271272 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271396 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273721 kubelet[2549]: W1008 20:03:19.271403 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271412 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.273721 kubelet[2549]: E1008 20:03:19.271570 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273936 kubelet[2549]: W1008 20:03:19.271577 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.273936 kubelet[2549]: E1008 20:03:19.271587 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.273936 kubelet[2549]: E1008 20:03:19.271805 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273936 kubelet[2549]: W1008 20:03:19.271813 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.273936 kubelet[2549]: E1008 20:03:19.271838 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.273936 kubelet[2549]: E1008 20:03:19.271970 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273936 kubelet[2549]: W1008 20:03:19.271976 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.273936 kubelet[2549]: E1008 20:03:19.271985 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.273936 kubelet[2549]: E1008 20:03:19.272118 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.273936 kubelet[2549]: W1008 20:03:19.272124 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272132 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272289 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.274302 kubelet[2549]: W1008 20:03:19.272296 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272305 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272550 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.274302 kubelet[2549]: W1008 20:03:19.272558 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272568 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272762 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.274302 kubelet[2549]: W1008 20:03:19.272774 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.274302 kubelet[2549]: E1008 20:03:19.272785 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.275377 kubelet[2549]: E1008 20:03:19.272975 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.275377 kubelet[2549]: W1008 20:03:19.272989 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.275377 kubelet[2549]: E1008 20:03:19.272999 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.275377 kubelet[2549]: E1008 20:03:19.273207 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.275377 kubelet[2549]: W1008 20:03:19.273214 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.275377 kubelet[2549]: E1008 20:03:19.273226 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.281231 kubelet[2549]: E1008 20:03:19.281189 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:19.281901 containerd[1434]: time="2024-10-08T20:03:19.281813814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787797f787-k5nvx,Uid:dcb9570e-47bb-4851-b90e-3c10c91716e1,Namespace:calico-system,Attempt:0,}" Oct 8 20:03:19.284970 kubelet[2549]: E1008 20:03:19.284584 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.284970 kubelet[2549]: W1008 20:03:19.284603 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.284970 kubelet[2549]: E1008 20:03:19.284635 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.284970 kubelet[2549]: I1008 20:03:19.284768 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/02b0616c-b70a-4eb4-99b0-3609843a3ee6-varrun\") pod \"csi-node-driver-jvqdn\" (UID: \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\") " pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:19.285236 kubelet[2549]: E1008 20:03:19.285218 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.285299 kubelet[2549]: W1008 20:03:19.285285 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.285363 kubelet[2549]: E1008 20:03:19.285352 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.285448 kubelet[2549]: I1008 20:03:19.285426 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02b0616c-b70a-4eb4-99b0-3609843a3ee6-registration-dir\") pod \"csi-node-driver-jvqdn\" (UID: \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\") " pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:19.285689 kubelet[2549]: E1008 20:03:19.285666 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.285689 kubelet[2549]: W1008 20:03:19.285683 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.285850 kubelet[2549]: E1008 20:03:19.285703 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.286054 kubelet[2549]: E1008 20:03:19.285969 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.286054 kubelet[2549]: W1008 20:03:19.285980 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.286054 kubelet[2549]: E1008 20:03:19.285995 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.286230 kubelet[2549]: E1008 20:03:19.286215 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.286230 kubelet[2549]: W1008 20:03:19.286229 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.286304 kubelet[2549]: E1008 20:03:19.286245 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.286422 kubelet[2549]: E1008 20:03:19.286392 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.286422 kubelet[2549]: W1008 20:03:19.286409 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.286422 kubelet[2549]: E1008 20:03:19.286423 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.286703 kubelet[2549]: E1008 20:03:19.286687 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.286703 kubelet[2549]: W1008 20:03:19.286701 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.286797 kubelet[2549]: E1008 20:03:19.286715 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.286797 kubelet[2549]: I1008 20:03:19.286752 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq894\" (UniqueName: \"kubernetes.io/projected/02b0616c-b70a-4eb4-99b0-3609843a3ee6-kube-api-access-pq894\") pod \"csi-node-driver-jvqdn\" (UID: \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\") " pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:19.287270 kubelet[2549]: E1008 20:03:19.287250 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.287270 kubelet[2549]: W1008 20:03:19.287269 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.287335 kubelet[2549]: E1008 20:03:19.287289 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.287335 kubelet[2549]: I1008 20:03:19.287313 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02b0616c-b70a-4eb4-99b0-3609843a3ee6-kubelet-dir\") pod \"csi-node-driver-jvqdn\" (UID: \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\") " pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:19.287640 kubelet[2549]: E1008 20:03:19.287604 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.287640 kubelet[2549]: W1008 20:03:19.287619 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.287701 kubelet[2549]: E1008 20:03:19.287651 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.287701 kubelet[2549]: I1008 20:03:19.287675 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02b0616c-b70a-4eb4-99b0-3609843a3ee6-socket-dir\") pod \"csi-node-driver-jvqdn\" (UID: \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\") " pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:19.288403 kubelet[2549]: E1008 20:03:19.288383 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.288403 kubelet[2549]: W1008 20:03:19.288401 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.288524 kubelet[2549]: E1008 20:03:19.288423 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.288602 kubelet[2549]: E1008 20:03:19.288591 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.288807 kubelet[2549]: W1008 20:03:19.288602 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.288931 kubelet[2549]: E1008 20:03:19.288890 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.289878 kubelet[2549]: E1008 20:03:19.289838 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.289878 kubelet[2549]: W1008 20:03:19.289857 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.289989 kubelet[2549]: E1008 20:03:19.289909 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.290402 kubelet[2549]: E1008 20:03:19.290339 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.290402 kubelet[2549]: W1008 20:03:19.290365 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.290554 kubelet[2549]: E1008 20:03:19.290466 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.290823 kubelet[2549]: E1008 20:03:19.290798 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.290823 kubelet[2549]: W1008 20:03:19.290816 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.290823 kubelet[2549]: E1008 20:03:19.290829 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.291071 kubelet[2549]: E1008 20:03:19.291022 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.291071 kubelet[2549]: W1008 20:03:19.291031 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.291071 kubelet[2549]: E1008 20:03:19.291043 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.334843 containerd[1434]: time="2024-10-08T20:03:19.334736521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:19.334843 containerd[1434]: time="2024-10-08T20:03:19.334787442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:19.334843 containerd[1434]: time="2024-10-08T20:03:19.334798642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:19.335321 containerd[1434]: time="2024-10-08T20:03:19.334868963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:19.363884 systemd[1]: Started cri-containerd-7536e052a2071cbf445be19e6d4ad83ecfd446ca063bbecb8d188588fc6ea7fd.scope - libcontainer container 7536e052a2071cbf445be19e6d4ad83ecfd446ca063bbecb8d188588fc6ea7fd. Oct 8 20:03:19.366746 kubelet[2549]: E1008 20:03:19.365458 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:19.366864 containerd[1434]: time="2024-10-08T20:03:19.366279974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wl5h5,Uid:092a96b1-f202-4f02-b8d7-071c960409bc,Namespace:calico-system,Attempt:0,}" Oct 8 20:03:19.389910 kubelet[2549]: E1008 20:03:19.389164 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.389910 kubelet[2549]: W1008 20:03:19.389185 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.389910 kubelet[2549]: E1008 20:03:19.389214 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.389910 kubelet[2549]: E1008 20:03:19.389461 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.389910 kubelet[2549]: W1008 20:03:19.389472 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.389910 kubelet[2549]: E1008 20:03:19.389511 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.389910 kubelet[2549]: E1008 20:03:19.389757 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.389910 kubelet[2549]: W1008 20:03:19.389766 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.389910 kubelet[2549]: E1008 20:03:19.389860 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.390356 kubelet[2549]: E1008 20:03:19.390208 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.390356 kubelet[2549]: W1008 20:03:19.390220 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.390356 kubelet[2549]: E1008 20:03:19.390238 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.390956 kubelet[2549]: E1008 20:03:19.390446 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.390956 kubelet[2549]: W1008 20:03:19.390460 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.390956 kubelet[2549]: E1008 20:03:19.390477 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.390956 kubelet[2549]: E1008 20:03:19.390778 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.390956 kubelet[2549]: W1008 20:03:19.390796 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.390956 kubelet[2549]: E1008 20:03:19.390849 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.391128 kubelet[2549]: E1008 20:03:19.391013 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.391128 kubelet[2549]: W1008 20:03:19.391022 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.391128 kubelet[2549]: E1008 20:03:19.391076 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.391540 kubelet[2549]: E1008 20:03:19.391514 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.391641 kubelet[2549]: W1008 20:03:19.391528 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.391641 kubelet[2549]: E1008 20:03:19.391593 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.391868 kubelet[2549]: E1008 20:03:19.391842 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.391868 kubelet[2549]: W1008 20:03:19.391862 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.391927 kubelet[2549]: E1008 20:03:19.391901 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.392065 kubelet[2549]: E1008 20:03:19.392043 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.392065 kubelet[2549]: W1008 20:03:19.392058 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.392260 kubelet[2549]: E1008 20:03:19.392232 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.392313 kubelet[2549]: E1008 20:03:19.392238 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.392313 kubelet[2549]: W1008 20:03:19.392274 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.392362 kubelet[2549]: E1008 20:03:19.392337 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.392532 kubelet[2549]: E1008 20:03:19.392510 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.392532 kubelet[2549]: W1008 20:03:19.392523 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.392670 kubelet[2549]: E1008 20:03:19.392641 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.392936 kubelet[2549]: E1008 20:03:19.392913 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.392983 kubelet[2549]: W1008 20:03:19.392930 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.392983 kubelet[2549]: E1008 20:03:19.392958 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.393805 kubelet[2549]: E1008 20:03:19.393767 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.393805 kubelet[2549]: W1008 20:03:19.393787 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.393919 kubelet[2549]: E1008 20:03:19.393821 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.395020 kubelet[2549]: E1008 20:03:19.394113 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.395020 kubelet[2549]: W1008 20:03:19.394129 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.395020 kubelet[2549]: E1008 20:03:19.394222 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.395020 kubelet[2549]: E1008 20:03:19.394487 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.395020 kubelet[2549]: W1008 20:03:19.394501 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.395020 kubelet[2549]: E1008 20:03:19.394588 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.395020 kubelet[2549]: E1008 20:03:19.394847 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.395020 kubelet[2549]: W1008 20:03:19.394860 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.395230 kubelet[2549]: E1008 20:03:19.395042 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.395303 kubelet[2549]: E1008 20:03:19.395277 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.395303 kubelet[2549]: W1008 20:03:19.395293 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.395365 kubelet[2549]: E1008 20:03:19.395328 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.396948 kubelet[2549]: E1008 20:03:19.396728 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.396948 kubelet[2549]: W1008 20:03:19.396745 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.396948 kubelet[2549]: E1008 20:03:19.396838 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.396948 kubelet[2549]: E1008 20:03:19.396923 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.396948 kubelet[2549]: W1008 20:03:19.396929 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.397140 kubelet[2549]: E1008 20:03:19.397005 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.397196 kubelet[2549]: E1008 20:03:19.397178 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.397196 kubelet[2549]: W1008 20:03:19.397190 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.397361 kubelet[2549]: E1008 20:03:19.397336 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.397478 kubelet[2549]: E1008 20:03:19.397459 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.397478 kubelet[2549]: W1008 20:03:19.397473 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.397538 kubelet[2549]: E1008 20:03:19.397515 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.400464 kubelet[2549]: E1008 20:03:19.397975 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.400464 kubelet[2549]: W1008 20:03:19.397992 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.400464 kubelet[2549]: E1008 20:03:19.398062 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:19.400464 kubelet[2549]: E1008 20:03:19.398267 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.400464 kubelet[2549]: W1008 20:03:19.398278 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.400464 kubelet[2549]: E1008 20:03:19.398302 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.400464 kubelet[2549]: E1008 20:03:19.398905 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.400464 kubelet[2549]: W1008 20:03:19.398919 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.400464 kubelet[2549]: E1008 20:03:19.398936 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.404184 containerd[1434]: time="2024-10-08T20:03:19.404069364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:19.404184 containerd[1434]: time="2024-10-08T20:03:19.404134605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:19.404184 containerd[1434]: time="2024-10-08T20:03:19.404161846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:19.404322 containerd[1434]: time="2024-10-08T20:03:19.404261247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:19.412015 kubelet[2549]: E1008 20:03:19.411971 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:19.412015 kubelet[2549]: W1008 20:03:19.411995 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:19.412015 kubelet[2549]: E1008 20:03:19.412014 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:19.432518 systemd[1]: Started cri-containerd-ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b.scope - libcontainer container ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b. 
Oct 8 20:03:19.433165 containerd[1434]: time="2024-10-08T20:03:19.433038737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787797f787-k5nvx,Uid:dcb9570e-47bb-4851-b90e-3c10c91716e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7536e052a2071cbf445be19e6d4ad83ecfd446ca063bbecb8d188588fc6ea7fd\"" Oct 8 20:03:19.434024 kubelet[2549]: E1008 20:03:19.433994 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:19.435242 containerd[1434]: time="2024-10-08T20:03:19.435017168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:03:19.463364 containerd[1434]: time="2024-10-08T20:03:19.463304010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wl5h5,Uid:092a96b1-f202-4f02-b8d7-071c960409bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\"" Oct 8 20:03:19.464101 kubelet[2549]: E1008 20:03:19.464077 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:20.809225 containerd[1434]: time="2024-10-08T20:03:20.809170290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:20.810125 containerd[1434]: time="2024-10-08T20:03:20.810077384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 20:03:20.811024 containerd[1434]: time="2024-10-08T20:03:20.810989397Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:20.813430 containerd[1434]: 
time="2024-10-08T20:03:20.813397752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:20.814691 containerd[1434]: time="2024-10-08T20:03:20.814653331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.379601203s" Oct 8 20:03:20.814742 containerd[1434]: time="2024-10-08T20:03:20.814691771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 20:03:20.815829 containerd[1434]: time="2024-10-08T20:03:20.815695866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:03:20.821516 containerd[1434]: time="2024-10-08T20:03:20.821474311Z" level=info msg="CreateContainer within sandbox \"7536e052a2071cbf445be19e6d4ad83ecfd446ca063bbecb8d188588fc6ea7fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:03:20.832043 containerd[1434]: time="2024-10-08T20:03:20.831998865Z" level=info msg="CreateContainer within sandbox \"7536e052a2071cbf445be19e6d4ad83ecfd446ca063bbecb8d188588fc6ea7fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be1254c9f1b0fc405668696c9ad183ccc8e8c95e39e4d5497e77564e97a74b4c\"" Oct 8 20:03:20.833490 containerd[1434]: time="2024-10-08T20:03:20.832420951Z" level=info msg="StartContainer for \"be1254c9f1b0fc405668696c9ad183ccc8e8c95e39e4d5497e77564e97a74b4c\"" Oct 8 20:03:20.846283 kubelet[2549]: E1008 20:03:20.846229 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvqdn" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6" Oct 8 20:03:20.861141 systemd[1]: Started cri-containerd-be1254c9f1b0fc405668696c9ad183ccc8e8c95e39e4d5497e77564e97a74b4c.scope - libcontainer container be1254c9f1b0fc405668696c9ad183ccc8e8c95e39e4d5497e77564e97a74b4c. Oct 8 20:03:20.901855 containerd[1434]: time="2024-10-08T20:03:20.901802207Z" level=info msg="StartContainer for \"be1254c9f1b0fc405668696c9ad183ccc8e8c95e39e4d5497e77564e97a74b4c\" returns successfully" Oct 8 20:03:20.921660 kubelet[2549]: E1008 20:03:20.921614 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:20.986718 kubelet[2549]: E1008 20:03:20.986673 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.986718 kubelet[2549]: W1008 20:03:20.986694 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.986718 kubelet[2549]: E1008 20:03:20.986720 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:20.987593 kubelet[2549]: E1008 20:03:20.986930 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.987593 kubelet[2549]: W1008 20:03:20.986939 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.987593 kubelet[2549]: E1008 20:03:20.986950 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:20.987593 kubelet[2549]: E1008 20:03:20.987167 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.987593 kubelet[2549]: W1008 20:03:20.987175 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.987593 kubelet[2549]: E1008 20:03:20.987186 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:20.987593 kubelet[2549]: E1008 20:03:20.987410 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.987593 kubelet[2549]: W1008 20:03:20.987419 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.987593 kubelet[2549]: E1008 20:03:20.987433 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:20.988034 kubelet[2549]: E1008 20:03:20.987830 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.988034 kubelet[2549]: W1008 20:03:20.987841 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.988034 kubelet[2549]: E1008 20:03:20.987853 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:20.988518 kubelet[2549]: E1008 20:03:20.988504 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.988518 kubelet[2549]: W1008 20:03:20.988517 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.988598 kubelet[2549]: E1008 20:03:20.988529 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:20.988736 kubelet[2549]: E1008 20:03:20.988722 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.988772 kubelet[2549]: W1008 20:03:20.988736 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.988772 kubelet[2549]: E1008 20:03:20.988748 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:20.988927 kubelet[2549]: E1008 20:03:20.988914 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.988927 kubelet[2549]: W1008 20:03:20.988927 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.989008 kubelet[2549]: E1008 20:03:20.988938 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:20.989118 kubelet[2549]: E1008 20:03:20.989107 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.989156 kubelet[2549]: W1008 20:03:20.989128 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.989156 kubelet[2549]: E1008 20:03:20.989139 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:03:20.989335 kubelet[2549]: E1008 20:03:20.989315 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.989335 kubelet[2549]: W1008 20:03:20.989327 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.989440 kubelet[2549]: E1008 20:03:20.989339 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:03:20.989692 kubelet[2549]: E1008 20:03:20.989663 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:03:20.989692 kubelet[2549]: W1008 20:03:20.989677 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:03:20.989692 kubelet[2549]: E1008 20:03:20.989689 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 8 20:03:20.990003 kubelet[2549]: E1008 20:03:20.989988 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:20.990003 kubelet[2549]: W1008 20:03:20.989997 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:20.990157 kubelet[2549]: E1008 20:03:20.990007 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:20.990278 kubelet[2549]: E1008 20:03:20.990267 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:20.990278 kubelet[2549]: W1008 20:03:20.990277 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:20.990278 kubelet[2549]: E1008 20:03:20.990289 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:20.990565 kubelet[2549]: E1008 20:03:20.990554 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:20.990565 kubelet[2549]: W1008 20:03:20.990564 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:20.990653 kubelet[2549]: E1008 20:03:20.990574 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:20.990863 kubelet[2549]: E1008 20:03:20.990828 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:20.990863 kubelet[2549]: W1008 20:03:20.990837 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:20.990863 kubelet[2549]: E1008 20:03:20.990847 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.003222 kubelet[2549]: E1008 20:03:21.003093 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.003222 kubelet[2549]: W1008 20:03:21.003106 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.003222 kubelet[2549]: E1008 20:03:21.003141 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.003615 kubelet[2549]: E1008 20:03:21.003513 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.003615 kubelet[2549]: W1008 20:03:21.003540 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.003615 kubelet[2549]: E1008 20:03:21.003554 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.004013 kubelet[2549]: E1008 20:03:21.003988 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.004013 kubelet[2549]: W1008 20:03:21.004000 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.004183 kubelet[2549]: E1008 20:03:21.004065 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.004550 kubelet[2549]: E1008 20:03:21.004511 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.004550 kubelet[2549]: W1008 20:03:21.004524 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.004736 kubelet[2549]: E1008 20:03:21.004664 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.005085 kubelet[2549]: E1008 20:03:21.004959 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.005085 kubelet[2549]: W1008 20:03:21.004970 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.005085 kubelet[2549]: E1008 20:03:21.005020 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.005327 kubelet[2549]: E1008 20:03:21.005302 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.005389 kubelet[2549]: W1008 20:03:21.005377 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.005503 kubelet[2549]: E1008 20:03:21.005483 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.005806 kubelet[2549]: E1008 20:03:21.005779 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.005806 kubelet[2549]: W1008 20:03:21.005791 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.005983 kubelet[2549]: E1008 20:03:21.005919 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.006267 kubelet[2549]: E1008 20:03:21.006197 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.006267 kubelet[2549]: W1008 20:03:21.006210 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.006946 kubelet[2549]: E1008 20:03:21.006272 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.007398 kubelet[2549]: E1008 20:03:21.007164 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.007398 kubelet[2549]: W1008 20:03:21.007176 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.007398 kubelet[2549]: E1008 20:03:21.007208 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.009090 kubelet[2549]: E1008 20:03:21.008985 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.009090 kubelet[2549]: W1008 20:03:21.008998 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.009090 kubelet[2549]: E1008 20:03:21.009014 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.009356 kubelet[2549]: E1008 20:03:21.009204 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.009356 kubelet[2549]: W1008 20:03:21.009213 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.009356 kubelet[2549]: E1008 20:03:21.009277 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.009477 kubelet[2549]: E1008 20:03:21.009465 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.010017 kubelet[2549]: W1008 20:03:21.009555 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.010238 kubelet[2549]: E1008 20:03:21.010181 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.011476 kubelet[2549]: E1008 20:03:21.010594 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.011476 kubelet[2549]: W1008 20:03:21.010608 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.011774 kubelet[2549]: E1008 20:03:21.011739 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.012423 kubelet[2549]: E1008 20:03:21.011998 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.012423 kubelet[2549]: W1008 20:03:21.012011 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.012423 kubelet[2549]: E1008 20:03:21.012073 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.012860 kubelet[2549]: E1008 20:03:21.012822 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.012860 kubelet[2549]: W1008 20:03:21.012836 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.013035 kubelet[2549]: E1008 20:03:21.012956 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.014226 kubelet[2549]: E1008 20:03:21.013842 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.014226 kubelet[2549]: W1008 20:03:21.013856 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.014226 kubelet[2549]: E1008 20:03:21.013874 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.014226 kubelet[2549]: E1008 20:03:21.014057 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.014226 kubelet[2549]: W1008 20:03:21.014069 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.014226 kubelet[2549]: E1008 20:03:21.014083 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.015171 kubelet[2549]: E1008 20:03:21.015092 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.015171 kubelet[2549]: W1008 20:03:21.015106 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.015171 kubelet[2549]: E1008 20:03:21.015120 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.924702 kubelet[2549]: I1008 20:03:21.924672 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 20:03:21.925267 kubelet[2549]: E1008 20:03:21.925247 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:03:21.996295 kubelet[2549]: E1008 20:03:21.996189 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.996295 kubelet[2549]: W1008 20:03:21.996205 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.996295 kubelet[2549]: E1008 20:03:21.996222 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.996504 kubelet[2549]: E1008 20:03:21.996491 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.996663 kubelet[2549]: W1008 20:03:21.996551 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.996663 kubelet[2549]: E1008 20:03:21.996568 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.996803 kubelet[2549]: E1008 20:03:21.996791 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.996857 kubelet[2549]: W1008 20:03:21.996846 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.996909 kubelet[2549]: E1008 20:03:21.996900 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.997223 kubelet[2549]: E1008 20:03:21.997119 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.997223 kubelet[2549]: W1008 20:03:21.997130 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.997223 kubelet[2549]: E1008 20:03:21.997144 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.997381 kubelet[2549]: E1008 20:03:21.997369 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.997433 kubelet[2549]: W1008 20:03:21.997423 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.997501 kubelet[2549]: E1008 20:03:21.997489 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.997814 kubelet[2549]: E1008 20:03:21.997723 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.997814 kubelet[2549]: W1008 20:03:21.997736 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.997814 kubelet[2549]: E1008 20:03:21.997748 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.998001 kubelet[2549]: E1008 20:03:21.997988 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.998169 kubelet[2549]: W1008 20:03:21.998074 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.998169 kubelet[2549]: E1008 20:03:21.998092 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.998301 kubelet[2549]: E1008 20:03:21.998288 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.998355 kubelet[2549]: W1008 20:03:21.998345 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.998475 kubelet[2549]: E1008 20:03:21.998451 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.998781 kubelet[2549]: E1008 20:03:21.998693 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.998781 kubelet[2549]: W1008 20:03:21.998704 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.998781 kubelet[2549]: E1008 20:03:21.998716 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.998947 kubelet[2549]: E1008 20:03:21.998935 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.999084 kubelet[2549]: W1008 20:03:21.998993 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.999084 kubelet[2549]: E1008 20:03:21.999010 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.999203 kubelet[2549]: E1008 20:03:21.999192 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.999255 kubelet[2549]: W1008 20:03:21.999245 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.999307 kubelet[2549]: E1008 20:03:21.999299 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.999562 kubelet[2549]: E1008 20:03:21.999549 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.999736 kubelet[2549]: W1008 20:03:21.999632 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:21.999736 kubelet[2549]: E1008 20:03:21.999650 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:21.999864 kubelet[2549]: E1008 20:03:21.999853 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:21.999918 kubelet[2549]: W1008 20:03:21.999908 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.000038 kubelet[2549]: E1008 20:03:21.999961 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.000139 kubelet[2549]: E1008 20:03:22.000127 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.000225 kubelet[2549]: W1008 20:03:22.000212 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.000279 kubelet[2549]: E1008 20:03:22.000270 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.000531 kubelet[2549]: E1008 20:03:22.000473 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.000531 kubelet[2549]: W1008 20:03:22.000483 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.000531 kubelet[2549]: E1008 20:03:22.000494 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.009817 kubelet[2549]: E1008 20:03:22.009800 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.009817 kubelet[2549]: W1008 20:03:22.009814 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.009899 kubelet[2549]: E1008 20:03:22.009828 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.010031 kubelet[2549]: E1008 20:03:22.010017 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.010031 kubelet[2549]: W1008 20:03:22.010030 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.010116 kubelet[2549]: E1008 20:03:22.010044 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.010219 kubelet[2549]: E1008 20:03:22.010203 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.010219 kubelet[2549]: W1008 20:03:22.010214 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.010265 kubelet[2549]: E1008 20:03:22.010226 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.010460 kubelet[2549]: E1008 20:03:22.010448 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.010460 kubelet[2549]: W1008 20:03:22.010459 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.010524 kubelet[2549]: E1008 20:03:22.010482 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.010724 kubelet[2549]: E1008 20:03:22.010709 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.010724 kubelet[2549]: W1008 20:03:22.010723 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.010779 kubelet[2549]: E1008 20:03:22.010740 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.010895 kubelet[2549]: E1008 20:03:22.010883 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.010895 kubelet[2549]: W1008 20:03:22.010894 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.011020 kubelet[2549]: E1008 20:03:22.010909 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.011068 kubelet[2549]: E1008 20:03:22.011054 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.011068 kubelet[2549]: W1008 20:03:22.011065 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.011124 kubelet[2549]: E1008 20:03:22.011078 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.011381 kubelet[2549]: E1008 20:03:22.011323 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.011381 kubelet[2549]: W1008 20:03:22.011338 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.011381 kubelet[2549]: E1008 20:03:22.011358 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.011580 kubelet[2549]: E1008 20:03:22.011565 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.011580 kubelet[2549]: W1008 20:03:22.011580 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.011660 kubelet[2549]: E1008 20:03:22.011596 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.011792 kubelet[2549]: E1008 20:03:22.011779 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.011792 kubelet[2549]: W1008 20:03:22.011790 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.011852 kubelet[2549]: E1008 20:03:22.011804 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.011962 kubelet[2549]: E1008 20:03:22.011952 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.011962 kubelet[2549]: W1008 20:03:22.011962 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.012019 kubelet[2549]: E1008 20:03:22.011992 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.012097 kubelet[2549]: E1008 20:03:22.012085 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.012097 kubelet[2549]: W1008 20:03:22.012095 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.012154 kubelet[2549]: E1008 20:03:22.012108 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.012280 kubelet[2549]: E1008 20:03:22.012269 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.012280 kubelet[2549]: W1008 20:03:22.012279 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.012335 kubelet[2549]: E1008 20:03:22.012295 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.012508 kubelet[2549]: E1008 20:03:22.012493 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.012552 kubelet[2549]: W1008 20:03:22.012524 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.012552 kubelet[2549]: E1008 20:03:22.012540 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.012751 kubelet[2549]: E1008 20:03:22.012737 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.012751 kubelet[2549]: W1008 20:03:22.012748 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.012818 kubelet[2549]: E1008 20:03:22.012765 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.013230 kubelet[2549]: E1008 20:03:22.013038 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.013230 kubelet[2549]: W1008 20:03:22.013052 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.013230 kubelet[2549]: E1008 20:03:22.013070 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.013339 kubelet[2549]: E1008 20:03:22.013274 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.013339 kubelet[2549]: W1008 20:03:22.013283 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.013339 kubelet[2549]: E1008 20:03:22.013300 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.013495 kubelet[2549]: E1008 20:03:22.013461 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:03:22.013495 kubelet[2549]: W1008 20:03:22.013481 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:03:22.013495 kubelet[2549]: E1008 20:03:22.013491 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:03:22.238800 containerd[1434]: time="2024-10-08T20:03:22.238692974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:22.242881 containerd[1434]: time="2024-10-08T20:03:22.242840148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Oct 8 20:03:22.243907 containerd[1434]: time="2024-10-08T20:03:22.243869401Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:22.246433 containerd[1434]: time="2024-10-08T20:03:22.246394674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:03:22.246928 containerd[1434]: time="2024-10-08T20:03:22.246893560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.431165454s" Oct 8 20:03:22.246966 containerd[1434]: time="2024-10-08T20:03:22.246925200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 20:03:22.248958 containerd[1434]: time="2024-10-08T20:03:22.248923106Z" level=info msg="CreateContainer within sandbox \"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:03:22.260510 containerd[1434]: time="2024-10-08T20:03:22.260456415Z" level=info msg="CreateContainer within sandbox \"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398\"" Oct 8 20:03:22.260933 containerd[1434]: time="2024-10-08T20:03:22.260861380Z" level=info msg="StartContainer for \"473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398\"" Oct 8 20:03:22.290834 systemd[1]: Started cri-containerd-473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398.scope - libcontainer container 473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398. Oct 8 20:03:22.321530 containerd[1434]: time="2024-10-08T20:03:22.321490080Z" level=info msg="StartContainer for \"473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398\" returns successfully" Oct 8 20:03:22.347920 systemd[1]: cri-containerd-473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398.scope: Deactivated successfully. 
Oct 8 20:03:22.435508 containerd[1434]: time="2024-10-08T20:03:22.431782660Z" level=info msg="shim disconnected" id=473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398 namespace=k8s.io Oct 8 20:03:22.435508 containerd[1434]: time="2024-10-08T20:03:22.435401707Z" level=warning msg="cleaning up after shim disconnected" id=473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398 namespace=k8s.io Oct 8 20:03:22.435508 containerd[1434]: time="2024-10-08T20:03:22.435413267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:03:22.846640 kubelet[2549]: E1008 20:03:22.846299 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvqdn" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6" Oct 8 20:03:22.926486 kubelet[2549]: E1008 20:03:22.926451 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:22.928445 containerd[1434]: time="2024-10-08T20:03:22.928408974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:03:22.940006 kubelet[2549]: I1008 20:03:22.939941 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-787797f787-k5nvx" podStartSLOduration=3.55962543 podStartE2EDuration="4.939907762s" podCreationTimestamp="2024-10-08 20:03:18 +0000 UTC" firstStartedPulling="2024-10-08 20:03:19.434737164 +0000 UTC m=+22.675260602" lastFinishedPulling="2024-10-08 20:03:20.815019496 +0000 UTC m=+24.055542934" observedRunningTime="2024-10-08 20:03:20.933132626 +0000 UTC m=+24.173656104" watchObservedRunningTime="2024-10-08 20:03:22.939907762 +0000 UTC m=+26.180431200" Oct 8 20:03:23.257619 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-473875c1ec9b666e37927c82e1bb82508fb87e089776383c1cf073a33d954398-rootfs.mount: Deactivated successfully. Oct 8 20:03:24.846288 kubelet[2549]: E1008 20:03:24.846248 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvqdn" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6" Oct 8 20:03:25.463698 containerd[1434]: time="2024-10-08T20:03:25.463618123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:25.464311 containerd[1434]: time="2024-10-08T20:03:25.464266850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 20:03:25.465352 containerd[1434]: time="2024-10-08T20:03:25.465311101Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:25.467102 containerd[1434]: time="2024-10-08T20:03:25.467067000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:25.467977 containerd[1434]: time="2024-10-08T20:03:25.467940289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.539482634s" Oct 8 20:03:25.467977 containerd[1434]: time="2024-10-08T20:03:25.467975849Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 20:03:25.469747 containerd[1434]: time="2024-10-08T20:03:25.469681268Z" level=info msg="CreateContainer within sandbox \"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:03:25.480039 containerd[1434]: time="2024-10-08T20:03:25.479995617Z" level=info msg="CreateContainer within sandbox \"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc\"" Oct 8 20:03:25.480437 containerd[1434]: time="2024-10-08T20:03:25.480404541Z" level=info msg="StartContainer for \"992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc\"" Oct 8 20:03:25.511845 systemd[1]: Started cri-containerd-992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc.scope - libcontainer container 992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc. Oct 8 20:03:25.533957 containerd[1434]: time="2024-10-08T20:03:25.533889629Z" level=info msg="StartContainer for \"992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc\" returns successfully" Oct 8 20:03:25.934871 kubelet[2549]: E1008 20:03:25.934840 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:26.071339 systemd[1]: cri-containerd-992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc.scope: Deactivated successfully. 
Oct 8 20:03:26.090406 kubelet[2549]: I1008 20:03:26.090355 2549 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:03:26.165246 kubelet[2549]: I1008 20:03:26.165208 2549 topology_manager.go:215] "Topology Admit Handler" podUID="607115c5-20c3-45be-b2fc-233b05ba8ddd" podNamespace="kube-system" podName="coredns-76f75df574-2fx9x" Oct 8 20:03:26.172578 systemd[1]: Created slice kubepods-burstable-pod607115c5_20c3_45be_b2fc_233b05ba8ddd.slice - libcontainer container kubepods-burstable-pod607115c5_20c3_45be_b2fc_233b05ba8ddd.slice. Oct 8 20:03:26.176445 kubelet[2549]: I1008 20:03:26.176144 2549 topology_manager.go:215] "Topology Admit Handler" podUID="fd379779-9501-4c20-b03d-dee08f6a4a3c" podNamespace="kube-system" podName="coredns-76f75df574-htgrq" Oct 8 20:03:26.177882 kubelet[2549]: I1008 20:03:26.177671 2549 topology_manager.go:215] "Topology Admit Handler" podUID="38f894d1-54bd-4496-83b6-2cd084c2920d" podNamespace="calico-system" podName="calico-kube-controllers-6b8996986b-k6ktv" Oct 8 20:03:26.183848 systemd[1]: Created slice kubepods-burstable-podfd379779_9501_4c20_b03d_dee08f6a4a3c.slice - libcontainer container kubepods-burstable-podfd379779_9501_4c20_b03d_dee08f6a4a3c.slice. Oct 8 20:03:26.189368 containerd[1434]: time="2024-10-08T20:03:26.189219336Z" level=info msg="shim disconnected" id=992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc namespace=k8s.io Oct 8 20:03:26.189368 containerd[1434]: time="2024-10-08T20:03:26.189274737Z" level=warning msg="cleaning up after shim disconnected" id=992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc namespace=k8s.io Oct 8 20:03:26.189368 containerd[1434]: time="2024-10-08T20:03:26.189282937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:03:26.193929 systemd[1]: Created slice kubepods-besteffort-pod38f894d1_54bd_4496_83b6_2cd084c2920d.slice - libcontainer container kubepods-besteffort-pod38f894d1_54bd_4496_83b6_2cd084c2920d.slice. 
Oct 8 20:03:26.237465 kubelet[2549]: I1008 20:03:26.237390 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zsh5\" (UniqueName: \"kubernetes.io/projected/fd379779-9501-4c20-b03d-dee08f6a4a3c-kube-api-access-5zsh5\") pod \"coredns-76f75df574-htgrq\" (UID: \"fd379779-9501-4c20-b03d-dee08f6a4a3c\") " pod="kube-system/coredns-76f75df574-htgrq" Oct 8 20:03:26.237465 kubelet[2549]: I1008 20:03:26.237463 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/607115c5-20c3-45be-b2fc-233b05ba8ddd-config-volume\") pod \"coredns-76f75df574-2fx9x\" (UID: \"607115c5-20c3-45be-b2fc-233b05ba8ddd\") " pod="kube-system/coredns-76f75df574-2fx9x" Oct 8 20:03:26.237652 kubelet[2549]: I1008 20:03:26.237523 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f894d1-54bd-4496-83b6-2cd084c2920d-tigera-ca-bundle\") pod \"calico-kube-controllers-6b8996986b-k6ktv\" (UID: \"38f894d1-54bd-4496-83b6-2cd084c2920d\") " pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" Oct 8 20:03:26.237652 kubelet[2549]: I1008 20:03:26.237558 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd379779-9501-4c20-b03d-dee08f6a4a3c-config-volume\") pod \"coredns-76f75df574-htgrq\" (UID: \"fd379779-9501-4c20-b03d-dee08f6a4a3c\") " pod="kube-system/coredns-76f75df574-htgrq" Oct 8 20:03:26.237652 kubelet[2549]: I1008 20:03:26.237587 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsxls\" (UniqueName: \"kubernetes.io/projected/607115c5-20c3-45be-b2fc-233b05ba8ddd-kube-api-access-gsxls\") pod \"coredns-76f75df574-2fx9x\" (UID: 
\"607115c5-20c3-45be-b2fc-233b05ba8ddd\") " pod="kube-system/coredns-76f75df574-2fx9x" Oct 8 20:03:26.237652 kubelet[2549]: I1008 20:03:26.237611 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z88jl\" (UniqueName: \"kubernetes.io/projected/38f894d1-54bd-4496-83b6-2cd084c2920d-kube-api-access-z88jl\") pod \"calico-kube-controllers-6b8996986b-k6ktv\" (UID: \"38f894d1-54bd-4496-83b6-2cd084c2920d\") " pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" Oct 8 20:03:26.478325 kubelet[2549]: E1008 20:03:26.477938 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:26.479498 containerd[1434]: time="2024-10-08T20:03:26.478775856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2fx9x,Uid:607115c5-20c3-45be-b2fc-233b05ba8ddd,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:26.480537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-992f6285624e36bb4c3af33b40fe717fc0bda3569014ae9451091bd98ed2cddc-rootfs.mount: Deactivated successfully. 
Oct 8 20:03:26.488800 kubelet[2549]: E1008 20:03:26.488768 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:26.491321 containerd[1434]: time="2024-10-08T20:03:26.490414172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-htgrq,Uid:fd379779-9501-4c20-b03d-dee08f6a4a3c,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:26.498308 containerd[1434]: time="2024-10-08T20:03:26.498278650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8996986b-k6ktv,Uid:38f894d1-54bd-4496-83b6-2cd084c2920d,Namespace:calico-system,Attempt:0,}" Oct 8 20:03:26.828734 containerd[1434]: time="2024-10-08T20:03:26.828523494Z" level=error msg="Failed to destroy network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.829336 containerd[1434]: time="2024-10-08T20:03:26.829302902Z" level=error msg="encountered an error cleaning up failed sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.829514 containerd[1434]: time="2024-10-08T20:03:26.829458624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8996986b-k6ktv,Uid:38f894d1-54bd-4496-83b6-2cd084c2920d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.832559 kubelet[2549]: E1008 20:03:26.832501 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.832734 kubelet[2549]: E1008 20:03:26.832606 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" Oct 8 20:03:26.832734 kubelet[2549]: E1008 20:03:26.832641 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" Oct 8 20:03:26.833283 kubelet[2549]: E1008 20:03:26.833257 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b8996986b-k6ktv_calico-system(38f894d1-54bd-4496-83b6-2cd084c2920d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6b8996986b-k6ktv_calico-system(38f894d1-54bd-4496-83b6-2cd084c2920d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" podUID="38f894d1-54bd-4496-83b6-2cd084c2920d" Oct 8 20:03:26.834259 containerd[1434]: time="2024-10-08T20:03:26.834227951Z" level=error msg="Failed to destroy network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.834751 containerd[1434]: time="2024-10-08T20:03:26.834645475Z" level=error msg="encountered an error cleaning up failed sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.834751 containerd[1434]: time="2024-10-08T20:03:26.834708036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-htgrq,Uid:fd379779-9501-4c20-b03d-dee08f6a4a3c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.835052 containerd[1434]: time="2024-10-08T20:03:26.835013399Z" level=error 
msg="Failed to destroy network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.835447 containerd[1434]: time="2024-10-08T20:03:26.835342762Z" level=error msg="encountered an error cleaning up failed sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.835447 containerd[1434]: time="2024-10-08T20:03:26.835384963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2fx9x,Uid:607115c5-20c3-45be-b2fc-233b05ba8ddd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.835779 kubelet[2549]: E1008 20:03:26.835757 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.836157 kubelet[2549]: E1008 20:03:26.835951 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2fx9x" Oct 8 20:03:26.836157 kubelet[2549]: E1008 20:03:26.835848 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.836157 kubelet[2549]: E1008 20:03:26.835988 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2fx9x" Oct 8 20:03:26.836157 kubelet[2549]: E1008 20:03:26.836017 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-htgrq" Oct 8 20:03:26.836368 kubelet[2549]: E1008 20:03:26.836044 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-htgrq" Oct 8 20:03:26.836368 kubelet[2549]: E1008 20:03:26.836094 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-htgrq_kube-system(fd379779-9501-4c20-b03d-dee08f6a4a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-htgrq_kube-system(fd379779-9501-4c20-b03d-dee08f6a4a3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-htgrq" podUID="fd379779-9501-4c20-b03d-dee08f6a4a3c" Oct 8 20:03:26.836368 kubelet[2549]: E1008 20:03:26.836126 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2fx9x_kube-system(607115c5-20c3-45be-b2fc-233b05ba8ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2fx9x_kube-system(607115c5-20c3-45be-b2fc-233b05ba8ddd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2fx9x" podUID="607115c5-20c3-45be-b2fc-233b05ba8ddd" Oct 8 20:03:26.851866 systemd[1]: Created slice kubepods-besteffort-pod02b0616c_b70a_4eb4_99b0_3609843a3ee6.slice - libcontainer container 
kubepods-besteffort-pod02b0616c_b70a_4eb4_99b0_3609843a3ee6.slice. Oct 8 20:03:26.853724 containerd[1434]: time="2024-10-08T20:03:26.853688105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvqdn,Uid:02b0616c-b70a-4eb4-99b0-3609843a3ee6,Namespace:calico-system,Attempt:0,}" Oct 8 20:03:26.913821 containerd[1434]: time="2024-10-08T20:03:26.913779902Z" level=error msg="Failed to destroy network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.914262 containerd[1434]: time="2024-10-08T20:03:26.914168626Z" level=error msg="encountered an error cleaning up failed sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.914262 containerd[1434]: time="2024-10-08T20:03:26.914216227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvqdn,Uid:02b0616c-b70a-4eb4-99b0-3609843a3ee6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.914571 kubelet[2549]: E1008 20:03:26.914543 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.914654 kubelet[2549]: E1008 20:03:26.914599 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:26.914708 kubelet[2549]: E1008 20:03:26.914619 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvqdn" Oct 8 20:03:26.914739 kubelet[2549]: E1008 20:03:26.914715 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvqdn_calico-system(02b0616c-b70a-4eb4-99b0-3609843a3ee6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvqdn_calico-system(02b0616c-b70a-4eb4-99b0-3609843a3ee6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvqdn" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6" Oct 8 20:03:26.940862 kubelet[2549]: E1008 20:03:26.940388 2549 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:26.943048 kubelet[2549]: I1008 20:03:26.941922 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:26.944471 containerd[1434]: time="2024-10-08T20:03:26.941167135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:03:26.944471 containerd[1434]: time="2024-10-08T20:03:26.944019883Z" level=info msg="StopPodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\"" Oct 8 20:03:26.945304 kubelet[2549]: I1008 20:03:26.945263 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:26.946444 containerd[1434]: time="2024-10-08T20:03:26.945943622Z" level=info msg="Ensure that sandbox 6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170 in task-service has been cleanup successfully" Oct 8 20:03:26.946904 containerd[1434]: time="2024-10-08T20:03:26.946790510Z" level=info msg="StopPodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\"" Oct 8 20:03:26.947980 containerd[1434]: time="2024-10-08T20:03:26.946956712Z" level=info msg="Ensure that sandbox 1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc in task-service has been cleanup successfully" Oct 8 20:03:26.949874 kubelet[2549]: I1008 20:03:26.949839 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:26.952095 kubelet[2549]: I1008 20:03:26.952050 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:26.952494 
containerd[1434]: time="2024-10-08T20:03:26.952362726Z" level=info msg="StopPodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\"" Oct 8 20:03:26.952777 containerd[1434]: time="2024-10-08T20:03:26.952748730Z" level=info msg="Ensure that sandbox c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c in task-service has been cleanup successfully" Oct 8 20:03:26.953689 containerd[1434]: time="2024-10-08T20:03:26.953249135Z" level=info msg="StopPodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\"" Oct 8 20:03:26.953689 containerd[1434]: time="2024-10-08T20:03:26.953387136Z" level=info msg="Ensure that sandbox 17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0 in task-service has been cleanup successfully" Oct 8 20:03:26.980881 containerd[1434]: time="2024-10-08T20:03:26.980787729Z" level=error msg="StopPodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" failed" error="failed to destroy network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.980881 containerd[1434]: time="2024-10-08T20:03:26.980832809Z" level=error msg="StopPodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" failed" error="failed to destroy network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.988927 kubelet[2549]: E1008 20:03:26.988125 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:26.988927 kubelet[2549]: E1008 20:03:26.988246 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170"} Oct 8 20:03:26.988927 kubelet[2549]: E1008 20:03:26.988296 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd379779-9501-4c20-b03d-dee08f6a4a3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:03:26.988927 kubelet[2549]: E1008 20:03:26.988328 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd379779-9501-4c20-b03d-dee08f6a4a3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-htgrq" podUID="fd379779-9501-4c20-b03d-dee08f6a4a3c" Oct 8 20:03:26.989166 kubelet[2549]: E1008 20:03:26.988816 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:26.989166 kubelet[2549]: E1008 20:03:26.988839 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0"} Oct 8 20:03:26.989166 kubelet[2549]: E1008 20:03:26.988878 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38f894d1-54bd-4496-83b6-2cd084c2920d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:03:26.989166 kubelet[2549]: E1008 20:03:26.988903 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38f894d1-54bd-4496-83b6-2cd084c2920d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" podUID="38f894d1-54bd-4496-83b6-2cd084c2920d" Oct 8 20:03:26.993606 containerd[1434]: time="2024-10-08T20:03:26.993539695Z" level=error msg="StopPodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" failed" error="failed to destroy 
network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.993841 kubelet[2549]: E1008 20:03:26.993799 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:26.993841 kubelet[2549]: E1008 20:03:26.993841 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c"} Oct 8 20:03:26.993944 kubelet[2549]: E1008 20:03:26.993874 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:03:26.993944 kubelet[2549]: E1008 20:03:26.993913 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02b0616c-b70a-4eb4-99b0-3609843a3ee6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvqdn" podUID="02b0616c-b70a-4eb4-99b0-3609843a3ee6" Oct 8 20:03:26.997139 containerd[1434]: time="2024-10-08T20:03:26.997098171Z" level=error msg="StopPodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" failed" error="failed to destroy network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:03:26.997562 kubelet[2549]: E1008 20:03:26.997418 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:26.997562 kubelet[2549]: E1008 20:03:26.997450 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc"} Oct 8 20:03:26.997562 kubelet[2549]: E1008 20:03:26.997495 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"607115c5-20c3-45be-b2fc-233b05ba8ddd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Oct 8 20:03:26.997562 kubelet[2549]: E1008 20:03:26.997534 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"607115c5-20c3-45be-b2fc-233b05ba8ddd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2fx9x" podUID="607115c5-20c3-45be-b2fc-233b05ba8ddd" Oct 8 20:03:27.478554 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170-shm.mount: Deactivated successfully. Oct 8 20:03:27.478670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc-shm.mount: Deactivated successfully. Oct 8 20:03:29.955955 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:56868.service - OpenSSH per-connection server daemon (10.0.0.1:56868). Oct 8 20:03:30.013722 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 56868 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:30.009379 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:30.020214 systemd-logind[1415]: New session 8 of user core. Oct 8 20:03:30.025795 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:03:30.173228 sshd[3597]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:30.176784 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:03:30.177140 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:56868.service: Deactivated successfully. Oct 8 20:03:30.183218 systemd[1]: session-8.scope: Deactivated successfully. 
Oct 8 20:03:30.185387 systemd-logind[1415]: Removed session 8. Oct 8 20:03:30.228467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654097171.mount: Deactivated successfully. Oct 8 20:03:30.484736 containerd[1434]: time="2024-10-08T20:03:30.484407584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:30.485060 containerd[1434]: time="2024-10-08T20:03:30.484909188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 20:03:30.485779 containerd[1434]: time="2024-10-08T20:03:30.485743634Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:30.487663 containerd[1434]: time="2024-10-08T20:03:30.487599528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:30.488219 containerd[1434]: time="2024-10-08T20:03:30.488191613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.546979518s" Oct 8 20:03:30.488261 containerd[1434]: time="2024-10-08T20:03:30.488226333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 20:03:30.496658 containerd[1434]: time="2024-10-08T20:03:30.494444541Z" level=info msg="CreateContainer within sandbox 
\"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:03:30.525036 containerd[1434]: time="2024-10-08T20:03:30.524984936Z" level=info msg="CreateContainer within sandbox \"ba329c69a6ce43f00704064510bf4c40e562235938dbbe310e205b3065c98f9b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41026c3aaf52bc771d6cd18911ff7e6398dd69c829fb8141e32bb88c767e727e\"" Oct 8 20:03:30.525477 containerd[1434]: time="2024-10-08T20:03:30.525443019Z" level=info msg="StartContainer for \"41026c3aaf52bc771d6cd18911ff7e6398dd69c829fb8141e32bb88c767e727e\"" Oct 8 20:03:30.575813 systemd[1]: Started cri-containerd-41026c3aaf52bc771d6cd18911ff7e6398dd69c829fb8141e32bb88c767e727e.scope - libcontainer container 41026c3aaf52bc771d6cd18911ff7e6398dd69c829fb8141e32bb88c767e727e. Oct 8 20:03:30.633186 containerd[1434]: time="2024-10-08T20:03:30.633140286Z" level=info msg="StartContainer for \"41026c3aaf52bc771d6cd18911ff7e6398dd69c829fb8141e32bb88c767e727e\" returns successfully" Oct 8 20:03:30.777120 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:03:30.777230 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 20:03:30.964380 kubelet[2549]: E1008 20:03:30.964292 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:31.965133 kubelet[2549]: I1008 20:03:31.965086 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:03:31.965890 kubelet[2549]: E1008 20:03:31.965865 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:33.182024 kubelet[2549]: I1008 20:03:33.181974 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:03:33.182777 kubelet[2549]: E1008 20:03:33.182755 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:35.184913 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:56646.service - OpenSSH per-connection server daemon (10.0.0.1:56646). Oct 8 20:03:35.227944 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 56646 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:35.229537 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:35.239887 systemd-logind[1415]: New session 9 of user core. Oct 8 20:03:35.249114 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:03:35.371653 sshd[3880]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:35.375441 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:56646.service: Deactivated successfully. Oct 8 20:03:35.377241 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:03:35.378277 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:03:35.379342 systemd-logind[1415]: Removed session 9. 
Oct 8 20:03:37.846768 containerd[1434]: time="2024-10-08T20:03:37.846697329Z" level=info msg="StopPodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\"" Oct 8 20:03:37.934058 kubelet[2549]: I1008 20:03:37.934016 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-wl5h5" podStartSLOduration=7.910384455 podStartE2EDuration="18.933950355s" podCreationTimestamp="2024-10-08 20:03:19 +0000 UTC" firstStartedPulling="2024-10-08 20:03:19.464892195 +0000 UTC m=+22.705415633" lastFinishedPulling="2024-10-08 20:03:30.488458095 +0000 UTC m=+33.728981533" observedRunningTime="2024-10-08 20:03:30.980418954 +0000 UTC m=+34.220942352" watchObservedRunningTime="2024-10-08 20:03:37.933950355 +0000 UTC m=+41.174473793" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.933 [INFO][3979] k8s.go 608: Cleaning up netns ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.933 [INFO][3979] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" iface="eth0" netns="/var/run/netns/cni-7a490d1e-b388-b204-9e5e-d8c0b123aebf" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.934 [INFO][3979] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" iface="eth0" netns="/var/run/netns/cni-7a490d1e-b388-b204-9e5e-d8c0b123aebf" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.935 [INFO][3979] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" iface="eth0" netns="/var/run/netns/cni-7a490d1e-b388-b204-9e5e-d8c0b123aebf" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.935 [INFO][3979] k8s.go 615: Releasing IP address(es) ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.935 [INFO][3979] utils.go 188: Calico CNI releasing IP address ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.996 [INFO][3987] ipam_plugin.go 417: Releasing address using handleID ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.996 [INFO][3987] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:37.996 [INFO][3987] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:38.005 [WARNING][3987] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:38.005 [INFO][3987] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:38.006 [INFO][3987] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:38.009705 containerd[1434]: 2024-10-08 20:03:38.008 [INFO][3979] k8s.go 621: Teardown processing complete. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:38.011660 containerd[1434]: time="2024-10-08T20:03:38.010780088Z" level=info msg="TearDown network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" successfully" Oct 8 20:03:38.011660 containerd[1434]: time="2024-10-08T20:03:38.010820528Z" level=info msg="StopPodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" returns successfully" Oct 8 20:03:38.012464 containerd[1434]: time="2024-10-08T20:03:38.012233495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvqdn,Uid:02b0616c-b70a-4eb4-99b0-3609843a3ee6,Namespace:calico-system,Attempt:1,}" Oct 8 20:03:38.012706 systemd[1]: run-netns-cni\x2d7a490d1e\x2db388\x2db204\x2d9e5e\x2dd8c0b123aebf.mount: Deactivated successfully. 
Oct 8 20:03:38.124097 systemd-networkd[1362]: cali01a91bc46e3: Link UP Oct 8 20:03:38.124295 systemd-networkd[1362]: cali01a91bc46e3: Gained carrier Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.046 [INFO][3995] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.058 [INFO][3995] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jvqdn-eth0 csi-node-driver- calico-system 02b0616c-b70a-4eb4-99b0-3609843a3ee6 762 0 2024-10-08 20:03:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-jvqdn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali01a91bc46e3 [] []}} ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.059 [INFO][3995] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.082 [INFO][4010] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" HandleID="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.093 [INFO][4010] ipam_plugin.go 270: Auto 
assigning IP ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" HandleID="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000300330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jvqdn", "timestamp":"2024-10-08 20:03:38.082691538 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.093 [INFO][4010] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.093 [INFO][4010] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.093 [INFO][4010] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.095 [INFO][4010] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.101 [INFO][4010] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.104 [INFO][4010] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.106 [INFO][4010] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.107 [INFO][4010] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:38.136758 
containerd[1434]: 2024-10-08 20:03:38.107 [INFO][4010] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.109 [INFO][4010] ipam.go 1685: Creating new handle: k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.112 [INFO][4010] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.116 [INFO][4010] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.116 [INFO][4010] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" host="localhost" Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.117 [INFO][4010] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:38.136758 containerd[1434]: 2024-10-08 20:03:38.117 [INFO][4010] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" HandleID="k8s-pod-network.ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.137240 containerd[1434]: 2024-10-08 20:03:38.119 [INFO][3995] k8s.go 386: Populated endpoint ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jvqdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02b0616c-b70a-4eb4-99b0-3609843a3ee6", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jvqdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali01a91bc46e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:38.137240 containerd[1434]: 2024-10-08 20:03:38.119 [INFO][3995] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.137240 containerd[1434]: 2024-10-08 20:03:38.119 [INFO][3995] dataplane_linux.go 68: Setting the host side veth name to cali01a91bc46e3 ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.137240 containerd[1434]: 2024-10-08 20:03:38.124 [INFO][3995] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.137240 containerd[1434]: 2024-10-08 20:03:38.124 [INFO][3995] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jvqdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02b0616c-b70a-4eb4-99b0-3609843a3ee6", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc", Pod:"csi-node-driver-jvqdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali01a91bc46e3", MAC:"3a:58:ac:82:6d:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:38.137240 containerd[1434]: 2024-10-08 20:03:38.132 [INFO][3995] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc" Namespace="calico-system" Pod="csi-node-driver-jvqdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:38.151231 containerd[1434]: time="2024-10-08T20:03:38.150816290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:38.151231 containerd[1434]: time="2024-10-08T20:03:38.151212532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:38.151231 containerd[1434]: time="2024-10-08T20:03:38.151224292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:38.151426 containerd[1434]: time="2024-10-08T20:03:38.151297972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:38.167786 systemd[1]: Started cri-containerd-ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc.scope - libcontainer container ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc. Oct 8 20:03:38.178709 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:03:38.190825 containerd[1434]: time="2024-10-08T20:03:38.190792793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvqdn,Uid:02b0616c-b70a-4eb4-99b0-3609843a3ee6,Namespace:calico-system,Attempt:1,} returns sandbox id \"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc\"" Oct 8 20:03:38.192231 containerd[1434]: time="2024-10-08T20:03:38.192208040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:03:38.846955 containerd[1434]: time="2024-10-08T20:03:38.846914761Z" level=info msg="StopPodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\"" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.885 [INFO][4112] k8s.go 608: Cleaning up netns ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.885 [INFO][4112] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" iface="eth0" netns="/var/run/netns/cni-3b630281-e684-a587-e98c-baffb6279c20" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.885 [INFO][4112] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" iface="eth0" netns="/var/run/netns/cni-3b630281-e684-a587-e98c-baffb6279c20" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.886 [INFO][4112] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" iface="eth0" netns="/var/run/netns/cni-3b630281-e684-a587-e98c-baffb6279c20" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.886 [INFO][4112] k8s.go 615: Releasing IP address(es) ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.886 [INFO][4112] utils.go 188: Calico CNI releasing IP address ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.902 [INFO][4120] ipam_plugin.go 417: Releasing address using handleID ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.902 [INFO][4120] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.902 [INFO][4120] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.910 [WARNING][4120] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.910 [INFO][4120] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.911 [INFO][4120] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:38.914476 containerd[1434]: 2024-10-08 20:03:38.913 [INFO][4112] k8s.go 621: Teardown processing complete. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:38.919673 containerd[1434]: time="2024-10-08T20:03:38.919617455Z" level=info msg="TearDown network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" successfully" Oct 8 20:03:38.919673 containerd[1434]: time="2024-10-08T20:03:38.919662975Z" level=info msg="StopPodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" returns successfully" Oct 8 20:03:38.919945 kubelet[2549]: E1008 20:03:38.919915 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:38.920367 containerd[1434]: time="2024-10-08T20:03:38.920233137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2fx9x,Uid:607115c5-20c3-45be-b2fc-233b05ba8ddd,Namespace:kube-system,Attempt:1,}" Oct 8 20:03:39.013528 systemd[1]: run-netns-cni\x2d3b630281\x2de684\x2da587\x2de98c\x2dbaffb6279c20.mount: Deactivated successfully. 
Oct 8 20:03:39.024729 systemd-networkd[1362]: calic52852a10ec: Link UP Oct 8 20:03:39.024881 systemd-networkd[1362]: calic52852a10ec: Gained carrier Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.944 [INFO][4129] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.956 [INFO][4129] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--2fx9x-eth0 coredns-76f75df574- kube-system 607115c5-20c3-45be-b2fc-233b05ba8ddd 772 0 2024-10-08 20:03:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-2fx9x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic52852a10ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.957 [INFO][4129] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.980 [INFO][4142] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" HandleID="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.991 [INFO][4142] ipam_plugin.go 270: Auto assigning IP 
ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" HandleID="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058fb00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-2fx9x", "timestamp":"2024-10-08 20:03:38.980138572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.991 [INFO][4142] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.991 [INFO][4142] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.991 [INFO][4142] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.992 [INFO][4142] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:38.995 [INFO][4142] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.001 [INFO][4142] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.004 [INFO][4142] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.006 [INFO][4142] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 
2024-10-08 20:03:39.006 [INFO][4142] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.008 [INFO][4142] ipam.go 1685: Creating new handle: k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6 Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.014 [INFO][4142] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.021 [INFO][4142] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.021 [INFO][4142] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" host="localhost" Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.021 [INFO][4142] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:39.037694 containerd[1434]: 2024-10-08 20:03:39.021 [INFO][4142] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" HandleID="k8s-pod-network.311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.038296 containerd[1434]: 2024-10-08 20:03:39.023 [INFO][4129] k8s.go 386: Populated endpoint ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2fx9x-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"607115c5-20c3-45be-b2fc-233b05ba8ddd", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-2fx9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic52852a10ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:39.038296 containerd[1434]: 2024-10-08 20:03:39.023 [INFO][4129] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.038296 containerd[1434]: 2024-10-08 20:03:39.023 [INFO][4129] dataplane_linux.go 68: Setting the host side veth name to calic52852a10ec ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.038296 containerd[1434]: 2024-10-08 20:03:39.024 [INFO][4129] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.038296 containerd[1434]: 2024-10-08 20:03:39.024 [INFO][4129] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2fx9x-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"607115c5-20c3-45be-b2fc-233b05ba8ddd", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6", Pod:"coredns-76f75df574-2fx9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic52852a10ec", MAC:"1e:eb:8e:11:34:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:39.038296 containerd[1434]: 2024-10-08 20:03:39.034 [INFO][4129] k8s.go 500: Wrote updated endpoint to datastore ContainerID="311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6" Namespace="kube-system" Pod="coredns-76f75df574-2fx9x" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:39.067670 containerd[1434]: 
time="2024-10-08T20:03:39.067560474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:39.067670 containerd[1434]: time="2024-10-08T20:03:39.067637074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:39.067670 containerd[1434]: time="2024-10-08T20:03:39.067653074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:39.068106 containerd[1434]: time="2024-10-08T20:03:39.068083356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:39.081840 systemd[1]: run-containerd-runc-k8s.io-311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6-runc.1AGYtW.mount: Deactivated successfully. Oct 8 20:03:39.089947 systemd[1]: Started cri-containerd-311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6.scope - libcontainer container 311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6. 
Oct 8 20:03:39.103248 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:03:39.127660 containerd[1434]: time="2024-10-08T20:03:39.127595572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2fx9x,Uid:607115c5-20c3-45be-b2fc-233b05ba8ddd,Namespace:kube-system,Attempt:1,} returns sandbox id \"311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6\"" Oct 8 20:03:39.128586 kubelet[2549]: E1008 20:03:39.128510 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:39.138318 containerd[1434]: time="2024-10-08T20:03:39.138267058Z" level=info msg="CreateContainer within sandbox \"311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:03:39.147924 containerd[1434]: time="2024-10-08T20:03:39.147877659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:39.148589 containerd[1434]: time="2024-10-08T20:03:39.148543382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 20:03:39.149574 containerd[1434]: time="2024-10-08T20:03:39.149272305Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:39.154329 containerd[1434]: time="2024-10-08T20:03:39.154278847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:39.155066 containerd[1434]: time="2024-10-08T20:03:39.155018730Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 962.78093ms" Oct 8 20:03:39.155066 containerd[1434]: time="2024-10-08T20:03:39.155060130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 20:03:39.157546 containerd[1434]: time="2024-10-08T20:03:39.157514381Z" level=info msg="CreateContainer within sandbox \"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:03:39.171858 containerd[1434]: time="2024-10-08T20:03:39.171779882Z" level=info msg="CreateContainer within sandbox \"311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23f4a87c64d585777b51f3e272037c9d94dbe0a0b3b79392c67e78f0c39f8328\"" Oct 8 20:03:39.173181 containerd[1434]: time="2024-10-08T20:03:39.173146408Z" level=info msg="StartContainer for \"23f4a87c64d585777b51f3e272037c9d94dbe0a0b3b79392c67e78f0c39f8328\"" Oct 8 20:03:39.182740 containerd[1434]: time="2024-10-08T20:03:39.182701129Z" level=info msg="CreateContainer within sandbox \"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0ff4ae47c858404d62bf96e2c2b8c6ed338468d0d3e4214fb217c6cb5e223414\"" Oct 8 20:03:39.184785 containerd[1434]: time="2024-10-08T20:03:39.183440412Z" level=info msg="StartContainer for \"0ff4ae47c858404d62bf96e2c2b8c6ed338468d0d3e4214fb217c6cb5e223414\"" Oct 8 20:03:39.197799 systemd[1]: Started cri-containerd-23f4a87c64d585777b51f3e272037c9d94dbe0a0b3b79392c67e78f0c39f8328.scope - 
libcontainer container 23f4a87c64d585777b51f3e272037c9d94dbe0a0b3b79392c67e78f0c39f8328. Oct 8 20:03:39.217805 systemd[1]: Started cri-containerd-0ff4ae47c858404d62bf96e2c2b8c6ed338468d0d3e4214fb217c6cb5e223414.scope - libcontainer container 0ff4ae47c858404d62bf96e2c2b8c6ed338468d0d3e4214fb217c6cb5e223414. Oct 8 20:03:39.238889 containerd[1434]: time="2024-10-08T20:03:39.238287168Z" level=info msg="StartContainer for \"23f4a87c64d585777b51f3e272037c9d94dbe0a0b3b79392c67e78f0c39f8328\" returns successfully" Oct 8 20:03:39.268141 containerd[1434]: time="2024-10-08T20:03:39.267986695Z" level=info msg="StartContainer for \"0ff4ae47c858404d62bf96e2c2b8c6ed338468d0d3e4214fb217c6cb5e223414\" returns successfully" Oct 8 20:03:39.270011 containerd[1434]: time="2024-10-08T20:03:39.269592702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:03:39.631849 systemd-networkd[1362]: cali01a91bc46e3: Gained IPv6LL Oct 8 20:03:39.991149 kubelet[2549]: E1008 20:03:39.990851 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:40.001979 kubelet[2549]: I1008 20:03:40.001543 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2fx9x" podStartSLOduration=28.001505928 podStartE2EDuration="28.001505928s" podCreationTimestamp="2024-10-08 20:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:03:40.000823965 +0000 UTC m=+43.241347443" watchObservedRunningTime="2024-10-08 20:03:40.001505928 +0000 UTC m=+43.242029366" Oct 8 20:03:40.017910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181679469.mount: Deactivated successfully. 
Oct 8 20:03:40.302700 containerd[1434]: time="2024-10-08T20:03:40.302569901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:40.303227 containerd[1434]: time="2024-10-08T20:03:40.303202783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 20:03:40.304131 containerd[1434]: time="2024-10-08T20:03:40.304096907Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:40.306111 containerd[1434]: time="2024-10-08T20:03:40.306079275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:40.306818 containerd[1434]: time="2024-10-08T20:03:40.306778278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.037135936s" Oct 8 20:03:40.306900 containerd[1434]: time="2024-10-08T20:03:40.306818478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 20:03:40.309049 containerd[1434]: time="2024-10-08T20:03:40.309011967Z" level=info msg="CreateContainer within sandbox \"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:03:40.321093 containerd[1434]: time="2024-10-08T20:03:40.321032855Z" level=info msg="CreateContainer within sandbox \"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c732d07046aea0b8cebe3ded6f492806c69238354167f8b597d57f7909b10d7a\"" Oct 8 20:03:40.321615 containerd[1434]: time="2024-10-08T20:03:40.321522737Z" level=info msg="StartContainer for \"c732d07046aea0b8cebe3ded6f492806c69238354167f8b597d57f7909b10d7a\"" Oct 8 20:03:40.344796 systemd[1]: Started cri-containerd-c732d07046aea0b8cebe3ded6f492806c69238354167f8b597d57f7909b10d7a.scope - libcontainer container c732d07046aea0b8cebe3ded6f492806c69238354167f8b597d57f7909b10d7a. Oct 8 20:03:40.371541 containerd[1434]: time="2024-10-08T20:03:40.371476659Z" level=info msg="StartContainer for \"c732d07046aea0b8cebe3ded6f492806c69238354167f8b597d57f7909b10d7a\" returns successfully" Oct 8 20:03:40.384834 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:56650.service - OpenSSH per-connection server daemon (10.0.0.1:56650). Oct 8 20:03:40.430095 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 56650 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:40.431646 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:40.435593 systemd-logind[1415]: New session 10 of user core. Oct 8 20:03:40.440834 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:03:40.594979 systemd-networkd[1362]: calic52852a10ec: Gained IPv6LL Oct 8 20:03:40.601692 sshd[4349]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:40.610944 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:56650.service: Deactivated successfully. Oct 8 20:03:40.613861 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:03:40.615465 systemd-logind[1415]: Session 10 logged out. 
Waiting for processes to exit. Oct 8 20:03:40.624037 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:56666.service - OpenSSH per-connection server daemon (10.0.0.1:56666). Oct 8 20:03:40.627710 systemd-logind[1415]: Removed session 10. Oct 8 20:03:40.666403 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 56666 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:40.667801 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:40.671742 systemd-logind[1415]: New session 11 of user core. Oct 8 20:03:40.681797 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:03:40.868682 sshd[4390]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:40.879183 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:56666.service: Deactivated successfully. Oct 8 20:03:40.880814 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:03:40.883816 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:03:40.887646 kubelet[2549]: I1008 20:03:40.884968 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:03:40.887646 kubelet[2549]: E1008 20:03:40.885537 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:40.891725 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680). Oct 8 20:03:40.893298 systemd-logind[1415]: Removed session 11. 
Oct 8 20:03:40.933196 kubelet[2549]: I1008 20:03:40.933154 2549 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:03:40.940047 kubelet[2549]: I1008 20:03:40.940003 2549 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:03:40.941027 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:40.942376 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:40.946963 systemd-logind[1415]: New session 12 of user core. Oct 8 20:03:40.956681 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:03:41.013899 kubelet[2549]: E1008 20:03:41.013778 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:41.014665 kubelet[2549]: E1008 20:03:41.014032 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:41.026173 kubelet[2549]: I1008 20:03:41.026125 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jvqdn" podStartSLOduration=19.91100913 podStartE2EDuration="22.02607133s" podCreationTimestamp="2024-10-08 20:03:19 +0000 UTC" firstStartedPulling="2024-10-08 20:03:38.191991079 +0000 UTC m=+41.432514517" lastFinishedPulling="2024-10-08 20:03:40.307053319 +0000 UTC m=+43.547576717" observedRunningTime="2024-10-08 20:03:41.025466768 +0000 UTC m=+44.265990206" watchObservedRunningTime="2024-10-08 20:03:41.02607133 +0000 UTC m=+44.266594768" Oct 8 20:03:41.079341 sshd[4402]: 
pam_unix(sshd:session): session closed for user core Oct 8 20:03:41.082712 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:56680.service: Deactivated successfully. Oct 8 20:03:41.085125 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:03:41.086517 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:03:41.087772 systemd-logind[1415]: Removed session 12. Oct 8 20:03:41.466665 kernel: bpftool[4436]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:03:41.644284 systemd-networkd[1362]: vxlan.calico: Link UP Oct 8 20:03:41.644292 systemd-networkd[1362]: vxlan.calico: Gained carrier Oct 8 20:03:41.848156 containerd[1434]: time="2024-10-08T20:03:41.846947591Z" level=info msg="StopPodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\"" Oct 8 20:03:41.848156 containerd[1434]: time="2024-10-08T20:03:41.847917114Z" level=info msg="StopPodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\"" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4565] k8s.go 608: Cleaning up netns ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4565] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" iface="eth0" netns="/var/run/netns/cni-d36b46ab-b394-cf00-481c-695b93fe8a9d" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4565] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" iface="eth0" netns="/var/run/netns/cni-d36b46ab-b394-cf00-481c-695b93fe8a9d" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4565] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" iface="eth0" netns="/var/run/netns/cni-d36b46ab-b394-cf00-481c-695b93fe8a9d" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4565] k8s.go 615: Releasing IP address(es) ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4565] utils.go 188: Calico CNI releasing IP address ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.957 [INFO][4596] ipam_plugin.go 417: Releasing address using handleID ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.957 [INFO][4596] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.958 [INFO][4596] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.970 [WARNING][4596] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.970 [INFO][4596] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.971 [INFO][4596] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:41.976976 containerd[1434]: 2024-10-08 20:03:41.973 [INFO][4565] k8s.go 621: Teardown processing complete. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:41.976976 containerd[1434]: time="2024-10-08T20:03:41.976738401Z" level=info msg="TearDown network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" successfully" Oct 8 20:03:41.976976 containerd[1434]: time="2024-10-08T20:03:41.976767641Z" level=info msg="StopPodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" returns successfully" Oct 8 20:03:41.978669 containerd[1434]: time="2024-10-08T20:03:41.977402123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8996986b-k6ktv,Uid:38f894d1-54bd-4496-83b6-2cd084c2920d,Namespace:calico-system,Attempt:1,}" Oct 8 20:03:41.977148 systemd[1]: run-netns-cni\x2dd36b46ab\x2db394\x2dcf00\x2d481c\x2d695b93fe8a9d.mount: Deactivated successfully. 
Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4566] k8s.go 608: Cleaning up netns ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4566] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" iface="eth0" netns="/var/run/netns/cni-7d74699d-9022-75af-a1ae-2b973390f989" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4566] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" iface="eth0" netns="/var/run/netns/cni-7d74699d-9022-75af-a1ae-2b973390f989" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4566] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" iface="eth0" netns="/var/run/netns/cni-7d74699d-9022-75af-a1ae-2b973390f989" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4566] k8s.go 615: Releasing IP address(es) ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.929 [INFO][4566] utils.go 188: Calico CNI releasing IP address ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.959 [INFO][4597] ipam_plugin.go 417: Releasing address using handleID ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.959 [INFO][4597] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.971 [INFO][4597] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.991 [WARNING][4597] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.991 [INFO][4597] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.992 [INFO][4597] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:41.997336 containerd[1434]: 2024-10-08 20:03:41.994 [INFO][4566] k8s.go 621: Teardown processing complete. 
ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:41.997336 containerd[1434]: time="2024-10-08T20:03:41.997201718Z" level=info msg="TearDown network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" successfully" Oct 8 20:03:41.997336 containerd[1434]: time="2024-10-08T20:03:41.997226318Z" level=info msg="StopPodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" returns successfully" Oct 8 20:03:41.998362 kubelet[2549]: E1008 20:03:41.997737 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:41.999760 containerd[1434]: time="2024-10-08T20:03:41.999727408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-htgrq,Uid:fd379779-9501-4c20-b03d-dee08f6a4a3c,Namespace:kube-system,Attempt:1,}" Oct 8 20:03:42.000850 systemd[1]: run-netns-cni\x2d7d74699d\x2d9022\x2d75af\x2da1ae\x2d2b973390f989.mount: Deactivated successfully. 
Oct 8 20:03:42.017091 kubelet[2549]: E1008 20:03:42.016842 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:42.138757 systemd-networkd[1362]: cali4a80b6989b1: Link UP Oct 8 20:03:42.142755 systemd-networkd[1362]: cali4a80b6989b1: Gained carrier Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.051 [INFO][4611] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0 calico-kube-controllers-6b8996986b- calico-system 38f894d1-54bd-4496-83b6-2cd084c2920d 852 0 2024-10-08 20:03:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b8996986b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b8996986b-k6ktv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a80b6989b1 [] []}} ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.051 [INFO][4611] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.077 [INFO][4636] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" 
HandleID="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.089 [INFO][4636] ipam_plugin.go 270: Auto assigning IP ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" HandleID="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fd140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b8996986b-k6ktv", "timestamp":"2024-10-08 20:03:42.077466563 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.090 [INFO][4636] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.090 [INFO][4636] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.090 [INFO][4636] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.091 [INFO][4636] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.096 [INFO][4636] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.100 [INFO][4636] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.102 [INFO][4636] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.104 [INFO][4636] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.104 [INFO][4636] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.105 [INFO][4636] ipam.go 1685: Creating new handle: k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.110 [INFO][4636] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.119 [INFO][4636] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" host="localhost" Oct 8 
20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.119 [INFO][4636] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" host="localhost" Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.119 [INFO][4636] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:42.160938 containerd[1434]: 2024-10-08 20:03:42.119 [INFO][4636] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" HandleID="k8s-pod-network.96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.162050 containerd[1434]: 2024-10-08 20:03:42.127 [INFO][4611] k8s.go 386: Populated endpoint ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0", GenerateName:"calico-kube-controllers-6b8996986b-", Namespace:"calico-system", SelfLink:"", UID:"38f894d1-54bd-4496-83b6-2cd084c2920d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8996986b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b8996986b-k6ktv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a80b6989b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:42.162050 containerd[1434]: 2024-10-08 20:03:42.128 [INFO][4611] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.162050 containerd[1434]: 2024-10-08 20:03:42.128 [INFO][4611] dataplane_linux.go 68: Setting the host side veth name to cali4a80b6989b1 ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.162050 containerd[1434]: 2024-10-08 20:03:42.140 [INFO][4611] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.162050 containerd[1434]: 2024-10-08 20:03:42.141 [INFO][4611] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" 
Pod="calico-kube-controllers-6b8996986b-k6ktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0", GenerateName:"calico-kube-controllers-6b8996986b-", Namespace:"calico-system", SelfLink:"", UID:"38f894d1-54bd-4496-83b6-2cd084c2920d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8996986b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc", Pod:"calico-kube-controllers-6b8996986b-k6ktv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a80b6989b1", MAC:"3e:fe:d0:86:c4:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:42.162050 containerd[1434]: 2024-10-08 20:03:42.158 [INFO][4611] k8s.go 500: Wrote updated endpoint to datastore ContainerID="96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc" Namespace="calico-system" Pod="calico-kube-controllers-6b8996986b-k6ktv" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:42.174339 systemd-networkd[1362]: califef0e6cb959: Link UP Oct 8 20:03:42.174602 systemd-networkd[1362]: califef0e6cb959: Gained carrier Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.056 [INFO][4627] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--htgrq-eth0 coredns-76f75df574- kube-system fd379779-9501-4c20-b03d-dee08f6a4a3c 853 0 2024-10-08 20:03:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-htgrq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califef0e6cb959 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.056 [INFO][4627] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.088 [INFO][4643] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" HandleID="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.101 [INFO][4643] ipam_plugin.go 270: Auto assigning IP ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" 
HandleID="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-htgrq", "timestamp":"2024-10-08 20:03:42.088720403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.101 [INFO][4643] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.119 [INFO][4643] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.120 [INFO][4643] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.122 [INFO][4643] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.127 [INFO][4643] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.142 [INFO][4643] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.148 [INFO][4643] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.151 [INFO][4643] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.151 [INFO][4643] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.154 [INFO][4643] ipam.go 1685: Creating new handle: k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591 Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.162 [INFO][4643] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.169 [INFO][4643] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.169 [INFO][4643] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" host="localhost" Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.169 [INFO][4643] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:42.188153 containerd[1434]: 2024-10-08 20:03:42.169 [INFO][4643] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" HandleID="k8s-pod-network.28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.188720 containerd[1434]: 2024-10-08 20:03:42.171 [INFO][4627] k8s.go 386: Populated endpoint ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--htgrq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd379779-9501-4c20-b03d-dee08f6a4a3c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-htgrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califef0e6cb959", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:42.188720 containerd[1434]: 2024-10-08 20:03:42.171 [INFO][4627] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.188720 containerd[1434]: 2024-10-08 20:03:42.171 [INFO][4627] dataplane_linux.go 68: Setting the host side veth name to califef0e6cb959 ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.188720 containerd[1434]: 2024-10-08 20:03:42.173 [INFO][4627] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.188720 containerd[1434]: 2024-10-08 20:03:42.173 [INFO][4627] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--htgrq-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd379779-9501-4c20-b03d-dee08f6a4a3c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591", Pod:"coredns-76f75df574-htgrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califef0e6cb959", MAC:"fa:a3:f1:8b:cc:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:42.188720 containerd[1434]: 2024-10-08 20:03:42.183 [INFO][4627] k8s.go 500: Wrote updated endpoint to datastore ContainerID="28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591" Namespace="kube-system" Pod="coredns-76f75df574-htgrq" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:42.201161 containerd[1434]: 
time="2024-10-08T20:03:42.199567756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:42.201161 containerd[1434]: time="2024-10-08T20:03:42.199719036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:42.201161 containerd[1434]: time="2024-10-08T20:03:42.200214638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:42.201161 containerd[1434]: time="2024-10-08T20:03:42.200354279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:42.205927 containerd[1434]: time="2024-10-08T20:03:42.205857338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:42.206014 containerd[1434]: time="2024-10-08T20:03:42.205954698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:42.206014 containerd[1434]: time="2024-10-08T20:03:42.205982579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:42.206160 containerd[1434]: time="2024-10-08T20:03:42.206133139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:42.236804 systemd[1]: Started cri-containerd-28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591.scope - libcontainer container 28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591. 
Oct 8 20:03:42.238014 systemd[1]: Started cri-containerd-96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc.scope - libcontainer container 96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc. Oct 8 20:03:42.248743 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:03:42.254138 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:03:42.275321 containerd[1434]: time="2024-10-08T20:03:42.275183104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-htgrq,Uid:fd379779-9501-4c20-b03d-dee08f6a4a3c,Namespace:kube-system,Attempt:1,} returns sandbox id \"28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591\"" Oct 8 20:03:42.276183 kubelet[2549]: E1008 20:03:42.276156 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:42.280483 containerd[1434]: time="2024-10-08T20:03:42.280246362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8996986b-k6ktv,Uid:38f894d1-54bd-4496-83b6-2cd084c2920d,Namespace:calico-system,Attempt:1,} returns sandbox id \"96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc\"" Oct 8 20:03:42.280761 containerd[1434]: time="2024-10-08T20:03:42.280730443Z" level=info msg="CreateContainer within sandbox \"28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:03:42.281550 containerd[1434]: time="2024-10-08T20:03:42.281480566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:03:42.295036 containerd[1434]: time="2024-10-08T20:03:42.294974614Z" level=info msg="CreateContainer within sandbox 
\"28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eed935fc06173dac557b4a64c138ab847d727d5b3651810284a9b3da0db7ccd8\"" Oct 8 20:03:42.295581 containerd[1434]: time="2024-10-08T20:03:42.295540936Z" level=info msg="StartContainer for \"eed935fc06173dac557b4a64c138ab847d727d5b3651810284a9b3da0db7ccd8\"" Oct 8 20:03:42.325898 systemd[1]: Started cri-containerd-eed935fc06173dac557b4a64c138ab847d727d5b3651810284a9b3da0db7ccd8.scope - libcontainer container eed935fc06173dac557b4a64c138ab847d727d5b3651810284a9b3da0db7ccd8. Oct 8 20:03:42.367094 containerd[1434]: time="2024-10-08T20:03:42.367042149Z" level=info msg="StartContainer for \"eed935fc06173dac557b4a64c138ab847d727d5b3651810284a9b3da0db7ccd8\" returns successfully" Oct 8 20:03:43.027673 kubelet[2549]: E1008 20:03:43.025324 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:43.026781 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Oct 8 20:03:43.036267 kubelet[2549]: I1008 20:03:43.036212 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-htgrq" podStartSLOduration=31.036179551 podStartE2EDuration="31.036179551s" podCreationTimestamp="2024-10-08 20:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:03:43.035184748 +0000 UTC m=+46.275708226" watchObservedRunningTime="2024-10-08 20:03:43.036179551 +0000 UTC m=+46.276702989" Oct 8 20:03:43.408735 systemd-networkd[1362]: califef0e6cb959: Gained IPv6LL Oct 8 20:03:43.634749 containerd[1434]: time="2024-10-08T20:03:43.634701258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 
20:03:43.635678 containerd[1434]: time="2024-10-08T20:03:43.635458300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 20:03:43.636397 containerd[1434]: time="2024-10-08T20:03:43.636363063Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:43.638695 containerd[1434]: time="2024-10-08T20:03:43.638664751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:43.639655 containerd[1434]: time="2024-10-08T20:03:43.639347513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.357833947s" Oct 8 20:03:43.639655 containerd[1434]: time="2024-10-08T20:03:43.639380273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 20:03:43.645700 containerd[1434]: time="2024-10-08T20:03:43.644983452Z" level=info msg="CreateContainer within sandbox \"96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:03:43.655495 containerd[1434]: time="2024-10-08T20:03:43.655439567Z" level=info msg="CreateContainer within sandbox \"96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c1047a575433825bf0645df6ddbe3bf6085965b84003e35011cd167081f8d30a\"" Oct 8 20:03:43.656406 containerd[1434]: time="2024-10-08T20:03:43.655854968Z" level=info msg="StartContainer for \"c1047a575433825bf0645df6ddbe3bf6085965b84003e35011cd167081f8d30a\"" Oct 8 20:03:43.663782 systemd-networkd[1362]: cali4a80b6989b1: Gained IPv6LL Oct 8 20:03:43.689833 systemd[1]: Started cri-containerd-c1047a575433825bf0645df6ddbe3bf6085965b84003e35011cd167081f8d30a.scope - libcontainer container c1047a575433825bf0645df6ddbe3bf6085965b84003e35011cd167081f8d30a. Oct 8 20:03:43.715658 containerd[1434]: time="2024-10-08T20:03:43.715567326Z" level=info msg="StartContainer for \"c1047a575433825bf0645df6ddbe3bf6085965b84003e35011cd167081f8d30a\" returns successfully" Oct 8 20:03:44.035023 kubelet[2549]: E1008 20:03:44.033759 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:44.096816 kubelet[2549]: I1008 20:03:44.096777 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b8996986b-k6ktv" podStartSLOduration=23.738395183 podStartE2EDuration="25.096738092s" podCreationTimestamp="2024-10-08 20:03:19 +0000 UTC" firstStartedPulling="2024-10-08 20:03:42.281303525 +0000 UTC m=+45.521826963" lastFinishedPulling="2024-10-08 20:03:43.639646434 +0000 UTC m=+46.880169872" observedRunningTime="2024-10-08 20:03:44.09615245 +0000 UTC m=+47.336675888" watchObservedRunningTime="2024-10-08 20:03:44.096738092 +0000 UTC m=+47.337261530" Oct 8 20:03:45.035233 kubelet[2549]: I1008 20:03:45.035149 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:03:45.036057 kubelet[2549]: E1008 20:03:45.035609 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:46.093934 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:51498.service - OpenSSH per-connection server daemon (10.0.0.1:51498). Oct 8 20:03:46.136528 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 51498 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:46.138119 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:46.142461 systemd-logind[1415]: New session 13 of user core. Oct 8 20:03:46.153796 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:03:46.326193 sshd[4857]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:46.337350 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:51498.service: Deactivated successfully. Oct 8 20:03:46.339806 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:03:46.341240 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:03:46.349908 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:51512.service - OpenSSH per-connection server daemon (10.0.0.1:51512). Oct 8 20:03:46.351297 systemd-logind[1415]: Removed session 13. Oct 8 20:03:46.384842 sshd[4872]: Accepted publickey for core from 10.0.0.1 port 51512 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:46.386259 sshd[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:46.390120 systemd-logind[1415]: New session 14 of user core. Oct 8 20:03:46.402816 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:03:46.640348 sshd[4872]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:46.650243 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:51512.service: Deactivated successfully. Oct 8 20:03:46.651715 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:03:46.653458 systemd-logind[1415]: Session 14 logged out. 
Waiting for processes to exit. Oct 8 20:03:46.659874 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:51516.service - OpenSSH per-connection server daemon (10.0.0.1:51516). Oct 8 20:03:46.661009 systemd-logind[1415]: Removed session 14. Oct 8 20:03:46.700617 sshd[4884]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:46.702139 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:46.706164 systemd-logind[1415]: New session 15 of user core. Oct 8 20:03:46.717783 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:03:48.087837 sshd[4884]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:48.102467 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:51516.service: Deactivated successfully. Oct 8 20:03:48.108309 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:03:48.112474 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:03:48.123274 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:51526.service - OpenSSH per-connection server daemon (10.0.0.1:51526). Oct 8 20:03:48.124729 systemd-logind[1415]: Removed session 15. Oct 8 20:03:48.162455 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 51526 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:48.163771 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:48.168166 systemd-logind[1415]: New session 16 of user core. Oct 8 20:03:48.178761 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:03:48.452170 sshd[4915]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:48.460426 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:51526.service: Deactivated successfully. Oct 8 20:03:48.463056 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:03:48.465611 systemd-logind[1415]: Session 16 logged out. 
Waiting for processes to exit. Oct 8 20:03:48.473912 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:51530.service - OpenSSH per-connection server daemon (10.0.0.1:51530). Oct 8 20:03:48.477939 systemd-logind[1415]: Removed session 16. Oct 8 20:03:48.509129 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 51530 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:48.510461 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:48.515978 systemd-logind[1415]: New session 17 of user core. Oct 8 20:03:48.521765 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:03:48.687870 sshd[4927]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:48.691396 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:51530.service: Deactivated successfully. Oct 8 20:03:48.693820 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:03:48.694367 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:03:48.695170 systemd-logind[1415]: Removed session 17. Oct 8 20:03:53.709184 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082). Oct 8 20:03:53.744890 sshd[4958]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:53.746151 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:53.750048 systemd-logind[1415]: New session 18 of user core. Oct 8 20:03:53.759820 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:03:53.878675 sshd[4958]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:53.882825 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:54082.service: Deactivated successfully. Oct 8 20:03:53.884992 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:03:53.886004 systemd-logind[1415]: Session 18 logged out. 
Waiting for processes to exit. Oct 8 20:03:53.887075 systemd-logind[1415]: Removed session 18. Oct 8 20:03:54.404261 kubelet[2549]: I1008 20:03:54.404087 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:03:56.829827 containerd[1434]: time="2024-10-08T20:03:56.829766519Z" level=info msg="StopPodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\"" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.865 [WARNING][5029] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2fx9x-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"607115c5-20c3-45be-b2fc-233b05ba8ddd", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6", Pod:"coredns-76f75df574-2fx9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic52852a10ec", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.865 [INFO][5029] k8s.go 608: Cleaning up netns ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.865 [INFO][5029] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" iface="eth0" netns="" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.865 [INFO][5029] k8s.go 615: Releasing IP address(es) ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.865 [INFO][5029] utils.go 188: Calico CNI releasing IP address ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.890 [INFO][5038] ipam_plugin.go 417: Releasing address using handleID ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.890 [INFO][5038] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.890 [INFO][5038] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.899 [WARNING][5038] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.899 [INFO][5038] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.900 [INFO][5038] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:56.904322 containerd[1434]: 2024-10-08 20:03:56.902 [INFO][5029] k8s.go 621: Teardown processing complete. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.905214 containerd[1434]: time="2024-10-08T20:03:56.904359634Z" level=info msg="TearDown network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" successfully" Oct 8 20:03:56.905214 containerd[1434]: time="2024-10-08T20:03:56.904383594Z" level=info msg="StopPodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" returns successfully" Oct 8 20:03:56.905214 containerd[1434]: time="2024-10-08T20:03:56.904858152Z" level=info msg="RemovePodSandbox for \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\"" Oct 8 20:03:56.907675 containerd[1434]: time="2024-10-08T20:03:56.907651865Z" level=info msg="Forcibly stopping sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\"" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.942 [WARNING][5062] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2fx9x-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"607115c5-20c3-45be-b2fc-233b05ba8ddd", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"311bc7fb6f53633b3499d202796a118e93788d09c8be133d4e91a87f672140f6", Pod:"coredns-76f75df574-2fx9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic52852a10ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.943 [INFO][5062] k8s.go 608: 
Cleaning up netns ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.943 [INFO][5062] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" iface="eth0" netns="" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.943 [INFO][5062] k8s.go 615: Releasing IP address(es) ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.943 [INFO][5062] utils.go 188: Calico CNI releasing IP address ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.964 [INFO][5070] ipam_plugin.go 417: Releasing address using handleID ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.964 [INFO][5070] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.964 [INFO][5070] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.972 [WARNING][5070] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.972 [INFO][5070] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" HandleID="k8s-pod-network.1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Workload="localhost-k8s-coredns--76f75df574--2fx9x-eth0" Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.973 [INFO][5070] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:56.977208 containerd[1434]: 2024-10-08 20:03:56.975 [INFO][5062] k8s.go 621: Teardown processing complete. ContainerID="1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc" Oct 8 20:03:56.977208 containerd[1434]: time="2024-10-08T20:03:56.977201794Z" level=info msg="TearDown network for sandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" successfully" Oct 8 20:03:56.980669 containerd[1434]: time="2024-10-08T20:03:56.980640344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:03:56.980753 containerd[1434]: time="2024-10-08T20:03:56.980697224Z" level=info msg="RemovePodSandbox \"1288112314736d4f32884b6cfd673dc8ba21db58407ff1e92ad6fede9391addc\" returns successfully" Oct 8 20:03:56.981211 containerd[1434]: time="2024-10-08T20:03:56.981186143Z" level=info msg="StopPodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\"" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.027 [WARNING][5093] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0", GenerateName:"calico-kube-controllers-6b8996986b-", Namespace:"calico-system", SelfLink:"", UID:"38f894d1-54bd-4496-83b6-2cd084c2920d", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8996986b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc", Pod:"calico-kube-controllers-6b8996986b-k6ktv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a80b6989b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.028 [INFO][5093] k8s.go 608: Cleaning up netns ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.028 [INFO][5093] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" iface="eth0" netns="" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.028 [INFO][5093] k8s.go 615: Releasing IP address(es) ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.028 [INFO][5093] utils.go 188: Calico CNI releasing IP address ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.046 [INFO][5101] ipam_plugin.go 417: Releasing address using handleID ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.046 [INFO][5101] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.046 [INFO][5101] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.055 [WARNING][5101] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.055 [INFO][5101] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.057 [INFO][5101] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:57.060655 containerd[1434]: 2024-10-08 20:03:57.058 [INFO][5093] k8s.go 621: Teardown processing complete. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.061187 containerd[1434]: time="2024-10-08T20:03:57.060676868Z" level=info msg="TearDown network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" successfully" Oct 8 20:03:57.061187 containerd[1434]: time="2024-10-08T20:03:57.060698827Z" level=info msg="StopPodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" returns successfully" Oct 8 20:03:57.061716 containerd[1434]: time="2024-10-08T20:03:57.061384041Z" level=info msg="RemovePodSandbox for \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\"" Oct 8 20:03:57.061716 containerd[1434]: time="2024-10-08T20:03:57.061434599Z" level=info msg="Forcibly stopping sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\"" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.093 [WARNING][5124] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0", GenerateName:"calico-kube-controllers-6b8996986b-", Namespace:"calico-system", SelfLink:"", UID:"38f894d1-54bd-4496-83b6-2cd084c2920d", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8996986b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96cbd0214a96157e0f3376aaa1b3e4cfcbb611adc2c633f62d3fbb7325b60adc", Pod:"calico-kube-controllers-6b8996986b-k6ktv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a80b6989b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.093 [INFO][5124] k8s.go 608: Cleaning up netns ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.093 [INFO][5124] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" iface="eth0" netns="" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.093 [INFO][5124] k8s.go 615: Releasing IP address(es) ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.093 [INFO][5124] utils.go 188: Calico CNI releasing IP address ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.112 [INFO][5132] ipam_plugin.go 417: Releasing address using handleID ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.112 [INFO][5132] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.112 [INFO][5132] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.121 [WARNING][5132] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.121 [INFO][5132] ipam_plugin.go 445: Releasing address using workloadID ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" HandleID="k8s-pod-network.17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Workload="localhost-k8s-calico--kube--controllers--6b8996986b--k6ktv-eth0" Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.123 [INFO][5132] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:57.126903 containerd[1434]: 2024-10-08 20:03:57.124 [INFO][5124] k8s.go 621: Teardown processing complete. ContainerID="17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0" Oct 8 20:03:57.127435 containerd[1434]: time="2024-10-08T20:03:57.126907174Z" level=info msg="TearDown network for sandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" successfully" Oct 8 20:03:57.129891 containerd[1434]: time="2024-10-08T20:03:57.129859983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:03:57.129967 containerd[1434]: time="2024-10-08T20:03:57.129916941Z" level=info msg="RemovePodSandbox \"17cbb0ffeb5bde4c23040ffa36b878ea33796d5b1600384746fe647e501bd6e0\" returns successfully" Oct 8 20:03:57.130560 containerd[1434]: time="2024-10-08T20:03:57.130305726Z" level=info msg="StopPodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\"" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.165 [WARNING][5154] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jvqdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02b0616c-b70a-4eb4-99b0-3609843a3ee6", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc", Pod:"csi-node-driver-jvqdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali01a91bc46e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.166 [INFO][5154] k8s.go 608: Cleaning up netns ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.166 [INFO][5154] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" iface="eth0" netns="" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.167 [INFO][5154] k8s.go 615: Releasing IP address(es) ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.167 [INFO][5154] utils.go 188: Calico CNI releasing IP address ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.207 [INFO][5162] ipam_plugin.go 417: Releasing address using handleID ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.207 [INFO][5162] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.207 [INFO][5162] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.219 [WARNING][5162] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.220 [INFO][5162] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.222 [INFO][5162] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:57.227140 containerd[1434]: 2024-10-08 20:03:57.224 [INFO][5154] k8s.go 621: Teardown processing complete. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.227649 containerd[1434]: time="2024-10-08T20:03:57.227182559Z" level=info msg="TearDown network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" successfully" Oct 8 20:03:57.227649 containerd[1434]: time="2024-10-08T20:03:57.227210358Z" level=info msg="StopPodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" returns successfully" Oct 8 20:03:57.228071 containerd[1434]: time="2024-10-08T20:03:57.227832934Z" level=info msg="RemovePodSandbox for \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\"" Oct 8 20:03:57.228071 containerd[1434]: time="2024-10-08T20:03:57.227863413Z" level=info msg="Forcibly stopping sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\"" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.272 [WARNING][5184] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jvqdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02b0616c-b70a-4eb4-99b0-3609843a3ee6", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebb35eb5e4466c97fbb0f34b3526a70983127ae13b2dcc19bca328bd1f7d9ccc", Pod:"csi-node-driver-jvqdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali01a91bc46e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.273 [INFO][5184] k8s.go 608: Cleaning up netns ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.273 [INFO][5184] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" iface="eth0" netns="" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.273 [INFO][5184] k8s.go 615: Releasing IP address(es) ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.273 [INFO][5184] utils.go 188: Calico CNI releasing IP address ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.296 [INFO][5191] ipam_plugin.go 417: Releasing address using handleID ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.296 [INFO][5191] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.296 [INFO][5191] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.305 [WARNING][5191] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.305 [INFO][5191] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" HandleID="k8s-pod-network.c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Workload="localhost-k8s-csi--node--driver--jvqdn-eth0" Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.307 [INFO][5191] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:03:57.311301 containerd[1434]: 2024-10-08 20:03:57.309 [INFO][5184] k8s.go 621: Teardown processing complete. ContainerID="c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c" Oct 8 20:03:57.311301 containerd[1434]: time="2024-10-08T20:03:57.311235234Z" level=info msg="TearDown network for sandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" successfully" Oct 8 20:03:57.314156 containerd[1434]: time="2024-10-08T20:03:57.314057967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:03:57.314267 containerd[1434]: time="2024-10-08T20:03:57.314230041Z" level=info msg="RemovePodSandbox \"c8d4223e11afdcb2aff9569c15e7423b8ef6328779bf3fc85931761157f97c2c\" returns successfully" Oct 8 20:03:57.314873 containerd[1434]: time="2024-10-08T20:03:57.314849458Z" level=info msg="StopPodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\"" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.359 [WARNING][5213] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--htgrq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd379779-9501-4c20-b03d-dee08f6a4a3c", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591", Pod:"coredns-76f75df574-htgrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califef0e6cb959", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.359 [INFO][5213] k8s.go 608: Cleaning up netns 
ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.359 [INFO][5213] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" iface="eth0" netns="" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.359 [INFO][5213] k8s.go 615: Releasing IP address(es) ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.359 [INFO][5213] utils.go 188: Calico CNI releasing IP address ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.386 [INFO][5221] ipam_plugin.go 417: Releasing address using handleID ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.387 [INFO][5221] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.387 [INFO][5221] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.398 [WARNING][5221] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.398 [INFO][5221] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.399 [INFO][5221] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:57.404000 containerd[1434]: 2024-10-08 20:03:57.401 [INFO][5213] k8s.go 621: Teardown processing complete. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.404000 containerd[1434]: time="2024-10-08T20:03:57.403830707Z" level=info msg="TearDown network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" successfully" Oct 8 20:03:57.404000 containerd[1434]: time="2024-10-08T20:03:57.403855626Z" level=info msg="StopPodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" returns successfully" Oct 8 20:03:57.405114 containerd[1434]: time="2024-10-08T20:03:57.404848709Z" level=info msg="RemovePodSandbox for \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\"" Oct 8 20:03:57.405114 containerd[1434]: time="2024-10-08T20:03:57.404881268Z" level=info msg="Forcibly stopping sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\"" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.444 [WARNING][5245] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--htgrq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fd379779-9501-4c20-b03d-dee08f6a4a3c", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28bcad831f9d6f502942f16059e753c967fe8233480b89b6f48233c47c0af591", Pod:"coredns-76f75df574-htgrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califef0e6cb959", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.444 [INFO][5245] k8s.go 608: Cleaning up netns 
ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.444 [INFO][5245] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" iface="eth0" netns="" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.444 [INFO][5245] k8s.go 615: Releasing IP address(es) ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.444 [INFO][5245] utils.go 188: Calico CNI releasing IP address ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.461 [INFO][5253] ipam_plugin.go 417: Releasing address using handleID ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.461 [INFO][5253] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.461 [INFO][5253] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.469 [WARNING][5253] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.469 [INFO][5253] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" HandleID="k8s-pod-network.6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Workload="localhost-k8s-coredns--76f75df574--htgrq-eth0" Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.470 [INFO][5253] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:03:57.474009 containerd[1434]: 2024-10-08 20:03:57.472 [INFO][5245] k8s.go 621: Teardown processing complete. ContainerID="6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170" Oct 8 20:03:57.474714 containerd[1434]: time="2024-10-08T20:03:57.474479727Z" level=info msg="TearDown network for sandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" successfully" Oct 8 20:03:57.477435 containerd[1434]: time="2024-10-08T20:03:57.477341099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:03:57.477435 containerd[1434]: time="2024-10-08T20:03:57.477410417Z" level=info msg="RemovePodSandbox \"6d0c2d44c174279033af56b5e5da6c2f434d94a80e5ba36ed3fe54fe3b7f6170\" returns successfully" Oct 8 20:03:58.890337 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:54094.service - OpenSSH per-connection server daemon (10.0.0.1:54094). 
Oct 8 20:03:58.933002 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 54094 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:58.934233 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:58.938081 systemd-logind[1415]: New session 19 of user core. Oct 8 20:03:58.946769 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:03:59.065058 sshd[5262]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:59.068713 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:54094.service: Deactivated successfully. Oct 8 20:03:59.070539 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:03:59.072664 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:03:59.073637 systemd-logind[1415]: Removed session 19. Oct 8 20:04:03.244142 kubelet[2549]: E1008 20:04:03.243691 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:04.075818 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:58890.service - OpenSSH per-connection server daemon (10.0.0.1:58890). Oct 8 20:04:04.120275 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 58890 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:04.121726 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:04.127861 systemd-logind[1415]: New session 20 of user core. Oct 8 20:04:04.137791 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:04:04.273102 sshd[5310]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:04.276510 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:58890.service: Deactivated successfully. Oct 8 20:04:04.279246 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:04:04.279921 systemd-logind[1415]: Session 20 logged out. 
Waiting for processes to exit. Oct 8 20:04:04.280913 systemd-logind[1415]: Removed session 20.